00:00:00.002 Started by upstream project "autotest-per-patch" build number 124186 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.193 > git --version # 'git version 2.39.2' 00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.087 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.097 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.108 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.108 > git config core.sparsecheckout # timeout=10 00:00:05.116 > git read-tree -mu HEAD # timeout=10 00:00:05.130 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.144 Commit message: "pool: fixes for VisualBuild class" 00:00:05.144 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:05.218 [Pipeline] Start of Pipeline 00:00:05.233 [Pipeline] library 00:00:05.235 Loading library shm_lib@master 00:00:05.235 Library shm_lib@master is cached. Copying from home. 00:00:05.254 [Pipeline] node 00:00:05.265 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.267 [Pipeline] { 00:00:05.280 [Pipeline] catchError 00:00:05.282 [Pipeline] { 00:00:05.296 [Pipeline] wrap 00:00:05.307 [Pipeline] { 00:00:05.316 [Pipeline] stage 00:00:05.318 [Pipeline] { (Prologue) 00:00:05.496 [Pipeline] sh 00:00:05.771 + logger -p user.info -t JENKINS-CI 00:00:05.788 [Pipeline] echo 00:00:05.789 Node: WFP5 00:00:05.796 [Pipeline] sh 00:00:06.089 [Pipeline] setCustomBuildProperty 00:00:06.097 [Pipeline] echo 00:00:06.098 Cleanup processes 00:00:06.101 [Pipeline] sh 00:00:06.376 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.376 626874 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.388 [Pipeline] sh 00:00:06.666 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.666 ++ grep -v 'sudo pgrep' 00:00:06.666 ++ awk '{print $1}' 00:00:06.666 + sudo kill -9 00:00:06.666 + true 00:00:06.681 [Pipeline] cleanWs 00:00:06.690 [WS-CLEANUP] Deleting project workspace... 00:00:06.690 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.696 [WS-CLEANUP] done 00:00:06.701 [Pipeline] setCustomBuildProperty 00:00:06.715 [Pipeline] sh 00:00:06.992 + sudo git config --global --replace-all safe.directory '*' 00:00:07.068 [Pipeline] nodesByLabel 00:00:07.069 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.077 [Pipeline] httpRequest 00:00:07.080 HttpMethod: GET 00:00:07.081 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.084 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.106 Response Code: HTTP/1.1 200 OK 00:00:07.107 Success: Status code 200 is in the accepted range: 200,404 00:00:07.107 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:14.452 [Pipeline] sh 00:00:14.733 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:14.751 [Pipeline] httpRequest 00:00:14.756 HttpMethod: GET 00:00:14.757 URL: http://10.211.164.101/packages/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:00:14.758 Sending request to url: http://10.211.164.101/packages/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:00:14.775 Response Code: HTTP/1.1 200 OK 00:00:14.776 Success: Status code 200 is in the accepted range: 200,404 00:00:14.776 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:01:06.851 [Pipeline] sh 00:01:07.133 + tar --no-same-owner -xf spdk_86abcfbbd67c7df8b6bcf1187d52e5b3aaa15ca9.tar.gz 00:01:09.677 [Pipeline] sh 00:01:09.958 + git -C spdk log --oneline -n5 00:01:09.958 86abcfbbd bdev_nvme: add debugging code to discovery path to debug issue #3401 00:01:09.958 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:01:09.958 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:01:09.958 f470a0dc6 event: do not call reactor events from spdk_thread context 00:01:09.958 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code 00:01:09.970 [Pipeline] } 00:01:09.984 [Pipeline] // stage 00:01:09.993 [Pipeline] stage 00:01:09.995 [Pipeline] { (Prepare) 00:01:10.011 [Pipeline] writeFile 00:01:10.024 [Pipeline] sh 00:01:10.298 + logger -p user.info -t JENKINS-CI 00:01:10.309 [Pipeline] sh 00:01:10.589 + logger -p user.info -t JENKINS-CI 00:01:10.602 [Pipeline] sh 00:01:10.884 + cat autorun-spdk.conf 00:01:10.884 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.884 SPDK_TEST_NVMF=1 00:01:10.884 SPDK_TEST_NVME_CLI=1 00:01:10.884 SPDK_TEST_NVMF_NICS=mlx5 00:01:10.884 SPDK_RUN_UBSAN=1 00:01:10.884 NET_TYPE=phy 00:01:10.891 RUN_NIGHTLY=0 00:01:10.896 [Pipeline] readFile 00:01:10.920 [Pipeline] withEnv 00:01:10.922 [Pipeline] { 00:01:10.936 [Pipeline] sh 00:01:11.251 + set -ex 00:01:11.251 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:11.251 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:11.251 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.251 ++ SPDK_TEST_NVMF=1 00:01:11.251 ++ SPDK_TEST_NVME_CLI=1 00:01:11.251 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:11.251 ++ SPDK_RUN_UBSAN=1 00:01:11.251 ++ NET_TYPE=phy 00:01:11.251 ++ RUN_NIGHTLY=0 00:01:11.251 + case $SPDK_TEST_NVMF_NICS in 00:01:11.251 + DRIVERS=mlx5_ib 00:01:11.251 + [[ -n mlx5_ib ]] 00:01:11.251 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:11.251 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:17.819 rmmod: ERROR: Module irdma is not currently loaded 00:01:17.819 rmmod: ERROR: Module 
i40iw is not currently loaded 00:01:17.819 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.819 + true 00:01:17.819 + for D in $DRIVERS 00:01:17.819 + sudo modprobe mlx5_ib 00:01:17.819 + exit 0 00:01:17.829 [Pipeline] } 00:01:17.849 [Pipeline] // withEnv 00:01:17.856 [Pipeline] } 00:01:17.874 [Pipeline] // stage 00:01:17.885 [Pipeline] catchError 00:01:17.887 [Pipeline] { 00:01:17.904 [Pipeline] timeout 00:01:17.904 Timeout set to expire in 40 min 00:01:17.906 [Pipeline] { 00:01:17.921 [Pipeline] stage 00:01:17.924 [Pipeline] { (Tests) 00:01:17.942 [Pipeline] sh 00:01:18.224 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.224 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.224 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:18.224 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:18.224 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:18.224 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:18.224 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:18.224 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:18.224 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:18.224 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:18.224 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:18.224 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:18.224 + source /etc/os-release 00:01:18.224 ++ NAME='Fedora Linux' 00:01:18.224 ++ VERSION='38 (Cloud Edition)' 00:01:18.224 ++ ID=fedora 00:01:18.224 ++ VERSION_ID=38 00:01:18.224 ++ VERSION_CODENAME= 00:01:18.224 ++ PLATFORM_ID=platform:f38 00:01:18.224 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:18.224 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.224 ++ LOGO=fedora-logo-icon 00:01:18.224 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:18.224 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.224 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:18.224 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.224 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.224 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.224 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:18.224 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.224 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:18.224 ++ SUPPORT_END=2024-05-14 00:01:18.225 ++ VARIANT='Cloud Edition' 00:01:18.225 ++ VARIANT_ID=cloud 00:01:18.225 + uname -a 00:01:18.225 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:18.225 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:20.760 Hugepages 00:01:20.760 node hugesize free / total 00:01:20.760 node0 1048576kB 0 / 0 00:01:20.760 node0 2048kB 0 / 0 00:01:20.760 node1 1048576kB 0 / 0 00:01:20.760 node1 2048kB 0 / 0 00:01:20.760 00:01:20.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:20.760 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:20.760 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:20.760 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:20.760 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:20.760 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:20.760 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:20.761 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:20.761 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:20.761 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:20.761 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 
00:01:20.761 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:20.761 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:21.018 + rm -f /tmp/spdk-ld-path 00:01:21.018 + source autorun-spdk.conf 00:01:21.018 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.018 ++ SPDK_TEST_NVMF=1 00:01:21.018 ++ SPDK_TEST_NVME_CLI=1 00:01:21.018 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:21.018 ++ SPDK_RUN_UBSAN=1 00:01:21.018 ++ NET_TYPE=phy 00:01:21.018 ++ RUN_NIGHTLY=0 00:01:21.018 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.018 + [[ -n '' ]] 00:01:21.018 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:21.018 + for M in /var/spdk/build-*-manifest.txt 00:01:21.018 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.018 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:21.018 + for M in /var/spdk/build-*-manifest.txt 00:01:21.018 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.018 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:21.018 ++ uname 00:01:21.018 + [[ Linux == \L\i\n\u\x ]] 00:01:21.018 + sudo dmesg -T 00:01:21.018 + sudo dmesg --clear 00:01:21.018 + dmesg_pid=628419 00:01:21.018 + [[ Fedora Linux == FreeBSD ]] 00:01:21.018 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.018 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.018 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.018 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:21.018 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:21.018 + sudo dmesg -Tw 00:01:21.018 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.018 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.018 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.018 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.018 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:21.018 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.018 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.018 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.018 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.018 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.018 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.018 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:21.018 Test configuration: 00:01:21.018 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.018 SPDK_TEST_NVMF=1 00:01:21.018 SPDK_TEST_NVME_CLI=1 00:01:21.018 SPDK_TEST_NVMF_NICS=mlx5 00:01:21.018 SPDK_RUN_UBSAN=1 00:01:21.018 NET_TYPE=phy 00:01:21.018 RUN_NIGHTLY=0 22:52:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:21.018 22:52:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.018 22:52:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.018 22:52:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.019 22:52:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 22:52:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 22:52:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 22:52:13 -- paths/export.sh@5 -- $ export PATH 00:01:21.019 22:52:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.019 22:52:13 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:21.019 22:52:13 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:21.019 22:52:13 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717793533.XXXXXX 00:01:21.019 22:52:13 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717793533.0P0MMz 00:01:21.019 22:52:13 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:21.019 22:52:13 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:21.019 22:52:13 -- common/autobuild_common.sh@446 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:21.019 22:52:13 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:21.019 22:52:13 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.019 22:52:13 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:21.019 22:52:13 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:21.019 22:52:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.019 22:52:13 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:21.019 22:52:13 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:21.019 22:52:13 -- pm/common@17 -- $ local monitor 00:01:21.019 22:52:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 22:52:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 22:52:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 22:52:13 -- pm/common@21 -- $ date +%s 00:01:21.019 22:52:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.019 22:52:13 -- pm/common@21 -- $ date +%s 00:01:21.019 22:52:13 -- pm/common@21 -- $ date +%s 00:01:21.019 22:52:13 -- pm/common@25 -- $ sleep 1 00:01:21.019 22:52:13 -- pm/common@21 -- $ date +%s 00:01:21.019 22:52:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793533 00:01:21.019 22:52:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793533 00:01:21.019 22:52:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793533 00:01:21.019 22:52:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717793533 00:01:21.277 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793533_collect-vmstat.pm.log 00:01:21.277 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793533_collect-cpu-load.pm.log 00:01:21.277 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793533_collect-cpu-temp.pm.log 00:01:21.277 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717793533_collect-bmc-pm.bmc.pm.log 00:01:22.215 22:52:14 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:22.215 22:52:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.215 22:52:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.215 22:52:14 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:22.215 22:52:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.215 Fri Jun 7 08:52:14 PM UTC 2024 00:01:22.215 22:52:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.215 v24.09-pre-53-g86abcfbbd 00:01:22.215 22:52:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.215 22:52:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.215 22:52:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.215 22:52:14 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:22.215 22:52:14 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:22.215 22:52:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.215 ************************************ 00:01:22.215 START TEST ubsan 00:01:22.215 ************************************ 00:01:22.215 22:52:14 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:22.215 using ubsan 00:01:22.215 00:01:22.215 real 0m0.000s 00:01:22.215 user 0m0.000s 00:01:22.215 sys 0m0.000s 00:01:22.215 22:52:14 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:22.215 22:52:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.215 ************************************ 00:01:22.215 END TEST ubsan 00:01:22.215 ************************************ 00:01:22.215 22:52:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:22.215 22:52:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:22.215 22:52:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:22.215 22:52:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:22.215 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:22.215 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:22.784 Using 'verbs' RDMA provider 00:01:35.647 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:45.623 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:45.883 Creating mk/config.mk...done. 00:01:45.883 Creating mk/cc.flags.mk...done. 00:01:45.883 Type 'make' to build. 00:01:45.883 22:52:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:45.883 22:52:38 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:45.883 22:52:38 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:45.883 22:52:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.883 ************************************ 00:01:45.883 START TEST make 00:01:45.883 ************************************ 00:01:45.883 22:52:38 make -- common/autotest_common.sh@1124 -- $ make -j96 00:01:46.141 make[1]: Nothing to be done for 'all'. 
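For reference, the test configuration this job sources (echoed above by "cat autorun-spdk.conf" and again before autorun starts) can be reproduced outside Jenkins with a minimal sketch like the one below. It only reuses values recorded in this log; the ./spdk checkout path is an assumption, and running the nvmf phy tests additionally requires an mlx5-capable NIC and the SPDK test dependencies installed on the host.

    # Write the same autorun-spdk.conf the pipeline generated (values copied from this log).
    printf '%s\n' \
      'SPDK_RUN_FUNCTIONAL_TEST=1' \
      'SPDK_TEST_NVMF=1' \
      'SPDK_TEST_NVME_CLI=1' \
      'SPDK_TEST_NVMF_NICS=mlx5' \
      'SPDK_RUN_UBSAN=1' \
      'NET_TYPE=phy' \
      'RUN_NIGHTLY=0' > autorun-spdk.conf

    # Invoke autorun the same way the job does, passing the config path as the only argument
    # (./spdk is assumed to be a checkout of the SPDK revision under test).
    ./spdk/autorun.sh "$PWD/autorun-spdk.conf"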
00:01:54.259 The Meson build system 00:01:54.259 Version: 1.3.1 00:01:54.259 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:54.259 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:54.259 Build type: native build 00:01:54.259 Program cat found: YES (/usr/bin/cat) 00:01:54.259 Project name: DPDK 00:01:54.259 Project version: 24.03.0 00:01:54.259 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.259 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.259 Host machine cpu family: x86_64 00:01:54.259 Host machine cpu: x86_64 00:01:54.259 Message: ## Building in Developer Mode ## 00:01:54.259 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.259 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.259 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.259 Program python3 found: YES (/usr/bin/python3) 00:01:54.259 Program cat found: YES (/usr/bin/cat) 00:01:54.259 Compiler for C supports arguments -march=native: YES 00:01:54.259 Checking for size of "void *" : 8 00:01:54.259 Checking for size of "void *" : 8 (cached) 00:01:54.259 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:54.259 Library m found: YES 00:01:54.259 Library numa found: YES 00:01:54.259 Has header "numaif.h" : YES 00:01:54.259 Library fdt found: NO 00:01:54.259 Library execinfo found: NO 00:01:54.259 Has header "execinfo.h" : YES 00:01:54.259 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.259 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.259 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.259 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.260 Run-time dependency openssl found: YES 3.0.9 00:01:54.260 Run-time dependency libpcap found: YES 1.10.4 00:01:54.260 Has header "pcap.h" with dependency libpcap: YES 00:01:54.260 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.260 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.260 Compiler for C supports arguments -Wformat: YES 00:01:54.260 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.260 Compiler for C supports arguments -Wformat-security: NO 00:01:54.260 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.260 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.260 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.260 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.260 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.260 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.260 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.260 Compiler for C supports arguments -Wundef: YES 00:01:54.260 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.260 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.260 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.260 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.260 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.260 Program objdump found: YES (/usr/bin/objdump) 00:01:54.260 Compiler for C supports arguments -mavx512f: YES 00:01:54.260 Checking if "AVX512 checking" compiles: YES 00:01:54.260 Fetching 
value of define "__SSE4_2__" : 1 00:01:54.260 Fetching value of define "__AES__" : 1 00:01:54.260 Fetching value of define "__AVX__" : 1 00:01:54.260 Fetching value of define "__AVX2__" : 1 00:01:54.260 Fetching value of define "__AVX512BW__" : 1 00:01:54.260 Fetching value of define "__AVX512CD__" : 1 00:01:54.260 Fetching value of define "__AVX512DQ__" : 1 00:01:54.260 Fetching value of define "__AVX512F__" : 1 00:01:54.260 Fetching value of define "__AVX512VL__" : 1 00:01:54.260 Fetching value of define "__PCLMUL__" : 1 00:01:54.260 Fetching value of define "__RDRND__" : 1 00:01:54.260 Fetching value of define "__RDSEED__" : 1 00:01:54.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.260 Fetching value of define "__znver1__" : (undefined) 00:01:54.260 Fetching value of define "__znver2__" : (undefined) 00:01:54.260 Fetching value of define "__znver3__" : (undefined) 00:01:54.260 Fetching value of define "__znver4__" : (undefined) 00:01:54.260 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.260 Message: lib/log: Defining dependency "log" 00:01:54.260 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.260 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.260 Checking for function "getentropy" : NO 00:01:54.260 Message: lib/eal: Defining dependency "eal" 00:01:54.260 Message: lib/ring: Defining dependency "ring" 00:01:54.260 Message: lib/rcu: Defining dependency "rcu" 00:01:54.260 Message: lib/mempool: Defining dependency "mempool" 00:01:54.260 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.260 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.260 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.260 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.260 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.260 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:54.260 Compiler for C supports arguments -mpclmul: YES 00:01:54.260 Compiler for C supports arguments -maes: YES 00:01:54.260 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.260 Compiler for C supports arguments -mavx512bw: YES 00:01:54.260 Compiler for C supports arguments -mavx512dq: YES 00:01:54.260 Compiler for C supports arguments -mavx512vl: YES 00:01:54.260 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.260 Compiler for C supports arguments -mavx2: YES 00:01:54.260 Compiler for C supports arguments -mavx: YES 00:01:54.260 Message: lib/net: Defining dependency "net" 00:01:54.260 Message: lib/meter: Defining dependency "meter" 00:01:54.260 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.260 Message: lib/pci: Defining dependency "pci" 00:01:54.260 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.260 Message: lib/hash: Defining dependency "hash" 00:01:54.260 Message: lib/timer: Defining dependency "timer" 00:01:54.260 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.260 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.260 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.260 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.260 Message: lib/power: Defining dependency "power" 00:01:54.260 Message: lib/reorder: Defining dependency "reorder" 00:01:54.260 Message: lib/security: Defining dependency "security" 00:01:54.260 Has header "linux/userfaultfd.h" : YES 00:01:54.260 Has header "linux/vduse.h" : YES 00:01:54.260 Message: 
lib/vhost: Defining dependency "vhost" 00:01:54.260 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.260 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.260 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.260 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.260 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.260 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.260 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.260 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.260 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.260 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.260 Program doxygen found: YES (/usr/bin/doxygen) 00:01:54.260 Configuring doxy-api-html.conf using configuration 00:01:54.260 Configuring doxy-api-man.conf using configuration 00:01:54.260 Program mandb found: YES (/usr/bin/mandb) 00:01:54.260 Program sphinx-build found: NO 00:01:54.260 Configuring rte_build_config.h using configuration 00:01:54.260 Message: 00:01:54.260 ================= 00:01:54.260 Applications Enabled 00:01:54.260 ================= 00:01:54.260 00:01:54.260 apps: 00:01:54.260 00:01:54.260 00:01:54.260 Message: 00:01:54.260 ================= 00:01:54.260 Libraries Enabled 00:01:54.260 ================= 00:01:54.260 00:01:54.260 libs: 00:01:54.260 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.260 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.260 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.260 00:01:54.260 Message: 00:01:54.260 =============== 00:01:54.260 Drivers Enabled 00:01:54.260 =============== 00:01:54.260 00:01:54.260 common: 00:01:54.260 00:01:54.260 bus: 00:01:54.260 pci, vdev, 00:01:54.260 mempool: 00:01:54.260 ring, 00:01:54.260 dma: 00:01:54.260 00:01:54.260 net: 00:01:54.260 00:01:54.260 crypto: 00:01:54.260 00:01:54.260 compress: 00:01:54.260 00:01:54.260 vdpa: 00:01:54.260 00:01:54.260 00:01:54.260 Message: 00:01:54.260 ================= 00:01:54.260 Content Skipped 00:01:54.260 ================= 00:01:54.260 00:01:54.260 apps: 00:01:54.260 dumpcap: explicitly disabled via build config 00:01:54.260 graph: explicitly disabled via build config 00:01:54.260 pdump: explicitly disabled via build config 00:01:54.260 proc-info: explicitly disabled via build config 00:01:54.260 test-acl: explicitly disabled via build config 00:01:54.260 test-bbdev: explicitly disabled via build config 00:01:54.260 test-cmdline: explicitly disabled via build config 00:01:54.260 test-compress-perf: explicitly disabled via build config 00:01:54.260 test-crypto-perf: explicitly disabled via build config 00:01:54.260 test-dma-perf: explicitly disabled via build config 00:01:54.260 test-eventdev: explicitly disabled via build config 00:01:54.260 test-fib: explicitly disabled via build config 00:01:54.260 test-flow-perf: explicitly disabled via build config 00:01:54.260 test-gpudev: explicitly disabled via build config 00:01:54.260 test-mldev: explicitly disabled via build config 00:01:54.260 test-pipeline: explicitly disabled via build config 00:01:54.260 test-pmd: explicitly disabled via build config 00:01:54.260 test-regex: explicitly disabled via build config 00:01:54.260 test-sad: explicitly disabled via build config 00:01:54.260 test-security-perf: explicitly disabled via 
build config 00:01:54.260 00:01:54.260 libs: 00:01:54.260 argparse: explicitly disabled via build config 00:01:54.260 metrics: explicitly disabled via build config 00:01:54.260 acl: explicitly disabled via build config 00:01:54.260 bbdev: explicitly disabled via build config 00:01:54.260 bitratestats: explicitly disabled via build config 00:01:54.260 bpf: explicitly disabled via build config 00:01:54.260 cfgfile: explicitly disabled via build config 00:01:54.260 distributor: explicitly disabled via build config 00:01:54.260 efd: explicitly disabled via build config 00:01:54.260 eventdev: explicitly disabled via build config 00:01:54.260 dispatcher: explicitly disabled via build config 00:01:54.260 gpudev: explicitly disabled via build config 00:01:54.260 gro: explicitly disabled via build config 00:01:54.260 gso: explicitly disabled via build config 00:01:54.260 ip_frag: explicitly disabled via build config 00:01:54.260 jobstats: explicitly disabled via build config 00:01:54.260 latencystats: explicitly disabled via build config 00:01:54.260 lpm: explicitly disabled via build config 00:01:54.260 member: explicitly disabled via build config 00:01:54.260 pcapng: explicitly disabled via build config 00:01:54.260 rawdev: explicitly disabled via build config 00:01:54.260 regexdev: explicitly disabled via build config 00:01:54.260 mldev: explicitly disabled via build config 00:01:54.260 rib: explicitly disabled via build config 00:01:54.260 sched: explicitly disabled via build config 00:01:54.260 stack: explicitly disabled via build config 00:01:54.260 ipsec: explicitly disabled via build config 00:01:54.260 pdcp: explicitly disabled via build config 00:01:54.260 fib: explicitly disabled via build config 00:01:54.260 port: explicitly disabled via build config 00:01:54.260 pdump: explicitly disabled via build config 00:01:54.261 table: explicitly disabled via build config 00:01:54.261 pipeline: explicitly disabled via build config 00:01:54.261 graph: explicitly disabled via build config 00:01:54.261 node: explicitly disabled via build config 00:01:54.261 00:01:54.261 drivers: 00:01:54.261 common/cpt: not in enabled drivers build config 00:01:54.261 common/dpaax: not in enabled drivers build config 00:01:54.261 common/iavf: not in enabled drivers build config 00:01:54.261 common/idpf: not in enabled drivers build config 00:01:54.261 common/ionic: not in enabled drivers build config 00:01:54.261 common/mvep: not in enabled drivers build config 00:01:54.261 common/octeontx: not in enabled drivers build config 00:01:54.261 bus/auxiliary: not in enabled drivers build config 00:01:54.261 bus/cdx: not in enabled drivers build config 00:01:54.261 bus/dpaa: not in enabled drivers build config 00:01:54.261 bus/fslmc: not in enabled drivers build config 00:01:54.261 bus/ifpga: not in enabled drivers build config 00:01:54.261 bus/platform: not in enabled drivers build config 00:01:54.261 bus/uacce: not in enabled drivers build config 00:01:54.261 bus/vmbus: not in enabled drivers build config 00:01:54.261 common/cnxk: not in enabled drivers build config 00:01:54.261 common/mlx5: not in enabled drivers build config 00:01:54.261 common/nfp: not in enabled drivers build config 00:01:54.261 common/nitrox: not in enabled drivers build config 00:01:54.261 common/qat: not in enabled drivers build config 00:01:54.261 common/sfc_efx: not in enabled drivers build config 00:01:54.261 mempool/bucket: not in enabled drivers build config 00:01:54.261 mempool/cnxk: not in enabled drivers build config 00:01:54.261 
mempool/dpaa: not in enabled drivers build config 00:01:54.261 mempool/dpaa2: not in enabled drivers build config 00:01:54.261 mempool/octeontx: not in enabled drivers build config 00:01:54.261 mempool/stack: not in enabled drivers build config 00:01:54.261 dma/cnxk: not in enabled drivers build config 00:01:54.261 dma/dpaa: not in enabled drivers build config 00:01:54.261 dma/dpaa2: not in enabled drivers build config 00:01:54.261 dma/hisilicon: not in enabled drivers build config 00:01:54.261 dma/idxd: not in enabled drivers build config 00:01:54.261 dma/ioat: not in enabled drivers build config 00:01:54.261 dma/skeleton: not in enabled drivers build config 00:01:54.261 net/af_packet: not in enabled drivers build config 00:01:54.261 net/af_xdp: not in enabled drivers build config 00:01:54.261 net/ark: not in enabled drivers build config 00:01:54.261 net/atlantic: not in enabled drivers build config 00:01:54.261 net/avp: not in enabled drivers build config 00:01:54.261 net/axgbe: not in enabled drivers build config 00:01:54.261 net/bnx2x: not in enabled drivers build config 00:01:54.261 net/bnxt: not in enabled drivers build config 00:01:54.261 net/bonding: not in enabled drivers build config 00:01:54.261 net/cnxk: not in enabled drivers build config 00:01:54.261 net/cpfl: not in enabled drivers build config 00:01:54.261 net/cxgbe: not in enabled drivers build config 00:01:54.261 net/dpaa: not in enabled drivers build config 00:01:54.261 net/dpaa2: not in enabled drivers build config 00:01:54.261 net/e1000: not in enabled drivers build config 00:01:54.261 net/ena: not in enabled drivers build config 00:01:54.261 net/enetc: not in enabled drivers build config 00:01:54.261 net/enetfec: not in enabled drivers build config 00:01:54.261 net/enic: not in enabled drivers build config 00:01:54.261 net/failsafe: not in enabled drivers build config 00:01:54.261 net/fm10k: not in enabled drivers build config 00:01:54.261 net/gve: not in enabled drivers build config 00:01:54.261 net/hinic: not in enabled drivers build config 00:01:54.261 net/hns3: not in enabled drivers build config 00:01:54.261 net/i40e: not in enabled drivers build config 00:01:54.261 net/iavf: not in enabled drivers build config 00:01:54.261 net/ice: not in enabled drivers build config 00:01:54.261 net/idpf: not in enabled drivers build config 00:01:54.261 net/igc: not in enabled drivers build config 00:01:54.261 net/ionic: not in enabled drivers build config 00:01:54.261 net/ipn3ke: not in enabled drivers build config 00:01:54.261 net/ixgbe: not in enabled drivers build config 00:01:54.261 net/mana: not in enabled drivers build config 00:01:54.261 net/memif: not in enabled drivers build config 00:01:54.261 net/mlx4: not in enabled drivers build config 00:01:54.261 net/mlx5: not in enabled drivers build config 00:01:54.261 net/mvneta: not in enabled drivers build config 00:01:54.261 net/mvpp2: not in enabled drivers build config 00:01:54.261 net/netvsc: not in enabled drivers build config 00:01:54.261 net/nfb: not in enabled drivers build config 00:01:54.261 net/nfp: not in enabled drivers build config 00:01:54.261 net/ngbe: not in enabled drivers build config 00:01:54.261 net/null: not in enabled drivers build config 00:01:54.261 net/octeontx: not in enabled drivers build config 00:01:54.261 net/octeon_ep: not in enabled drivers build config 00:01:54.261 net/pcap: not in enabled drivers build config 00:01:54.261 net/pfe: not in enabled drivers build config 00:01:54.261 net/qede: not in enabled drivers build config 00:01:54.261 
net/ring: not in enabled drivers build config 00:01:54.261 net/sfc: not in enabled drivers build config 00:01:54.261 net/softnic: not in enabled drivers build config 00:01:54.261 net/tap: not in enabled drivers build config 00:01:54.261 net/thunderx: not in enabled drivers build config 00:01:54.261 net/txgbe: not in enabled drivers build config 00:01:54.261 net/vdev_netvsc: not in enabled drivers build config 00:01:54.261 net/vhost: not in enabled drivers build config 00:01:54.261 net/virtio: not in enabled drivers build config 00:01:54.261 net/vmxnet3: not in enabled drivers build config 00:01:54.261 raw/*: missing internal dependency, "rawdev" 00:01:54.261 crypto/armv8: not in enabled drivers build config 00:01:54.261 crypto/bcmfs: not in enabled drivers build config 00:01:54.261 crypto/caam_jr: not in enabled drivers build config 00:01:54.261 crypto/ccp: not in enabled drivers build config 00:01:54.261 crypto/cnxk: not in enabled drivers build config 00:01:54.261 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.261 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.262 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.262 crypto/mlx5: not in enabled drivers build config 00:01:54.262 crypto/mvsam: not in enabled drivers build config 00:01:54.262 crypto/nitrox: not in enabled drivers build config 00:01:54.262 crypto/null: not in enabled drivers build config 00:01:54.262 crypto/octeontx: not in enabled drivers build config 00:01:54.262 crypto/openssl: not in enabled drivers build config 00:01:54.262 crypto/scheduler: not in enabled drivers build config 00:01:54.262 crypto/uadk: not in enabled drivers build config 00:01:54.262 crypto/virtio: not in enabled drivers build config 00:01:54.262 compress/isal: not in enabled drivers build config 00:01:54.262 compress/mlx5: not in enabled drivers build config 00:01:54.262 compress/nitrox: not in enabled drivers build config 00:01:54.262 compress/octeontx: not in enabled drivers build config 00:01:54.262 compress/zlib: not in enabled drivers build config 00:01:54.262 regex/*: missing internal dependency, "regexdev" 00:01:54.262 ml/*: missing internal dependency, "mldev" 00:01:54.262 vdpa/ifc: not in enabled drivers build config 00:01:54.262 vdpa/mlx5: not in enabled drivers build config 00:01:54.262 vdpa/nfp: not in enabled drivers build config 00:01:54.262 vdpa/sfc: not in enabled drivers build config 00:01:54.262 event/*: missing internal dependency, "eventdev" 00:01:54.262 baseband/*: missing internal dependency, "bbdev" 00:01:54.262 gpu/*: missing internal dependency, "gpudev" 00:01:54.262 00:01:54.262 00:01:54.262 Build targets in project: 85 00:01:54.262 00:01:54.262 DPDK 24.03.0 00:01:54.262 00:01:54.262 User defined options 00:01:54.262 buildtype : debug 00:01:54.262 default_library : shared 00:01:54.262 libdir : lib 00:01:54.262 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:54.262 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.262 c_link_args : 00:01:54.262 cpu_instruction_set: native 00:01:54.262 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:54.262 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:54.262 enable_docs : false 00:01:54.262 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:54.262 enable_kmods : false 00:01:54.262 tests : false 00:01:54.262 00:01:54.262 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.529 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.529 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.529 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.529 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.529 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.795 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.795 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.795 [7/268] Linking static target lib/librte_kvargs.a 00:01:54.795 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.795 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.795 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.795 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.795 [12/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.795 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.795 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.795 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.795 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.795 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.795 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.795 [19/268] Linking static target lib/librte_log.a 00:01:54.795 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.795 [21/268] Linking static target lib/librte_pci.a 00:01:54.795 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.795 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.054 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.054 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.054 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.054 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.054 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.054 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.054 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.054 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.054 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.054 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.054 [34/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:55.054 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.054 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.054 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.054 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.054 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.054 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.054 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.054 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.054 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.054 [44/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.054 [45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.054 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.054 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.054 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.054 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.054 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.312 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.312 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.312 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.312 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.312 [55/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.312 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.312 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.312 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.312 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.312 [60/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.312 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.312 [62/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.312 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.312 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.312 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.312 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.312 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.312 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.312 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.312 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.312 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.312 [72/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.312 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.312 [74/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.312 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.312 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.312 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.312 [78/268] Linking static target lib/librte_telemetry.a 00:01:55.312 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.312 [80/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.312 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.312 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.312 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.312 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.312 [85/268] Linking static target lib/librte_meter.a 00:01:55.312 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.312 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.312 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.312 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.312 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.312 [91/268] Linking static target lib/librte_ring.a 00:01:55.312 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.312 [93/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.312 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.312 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.312 [96/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.312 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.312 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.312 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.312 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.312 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.312 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.312 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.312 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.312 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.312 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.312 [107/268] Linking static target lib/librte_mempool.a 00:01:55.312 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.312 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.312 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.312 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.312 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.312 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.312 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.312 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.312 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.312 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.312 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.312 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.312 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.312 [121/268] Linking static target lib/librte_rcu.a 00:01:55.312 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.312 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.312 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.312 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.570 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.570 [127/268] Linking static target lib/librte_net.a 00:01:55.570 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.570 [129/268] Linking static target lib/librte_cmdline.a 00:01:55.570 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.570 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.570 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.570 [133/268] Linking static target lib/librte_eal.a 00:01:55.570 [134/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.570 [135/268] Linking static target lib/librte_mbuf.a 00:01:55.570 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.570 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.570 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.570 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.570 [140/268] Linking target lib/librte_log.so.24.1 00:01:55.570 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.570 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.570 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.570 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.570 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.570 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.570 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.570 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.570 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.570 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.570 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.570 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.570 [153/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.570 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.570 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.570 [156/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.570 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.828 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.828 [159/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.828 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.828 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.828 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.828 [163/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.828 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.828 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.828 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.828 [167/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.828 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.828 [169/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.828 [170/268] Linking target lib/librte_telemetry.so.24.1 00:01:55.828 [171/268] Linking static target lib/librte_compressdev.a 00:01:55.828 [172/268] Linking static target lib/librte_timer.a 00:01:55.828 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.828 [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.828 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.828 [176/268] Linking static target lib/librte_reorder.a 00:01:55.828 [177/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.828 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.828 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.828 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.828 [181/268] Linking static target lib/librte_power.a 00:01:55.828 [182/268] Linking static target lib/librte_security.a 00:01:55.828 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.828 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.828 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.828 [186/268] Linking static target lib/librte_dmadev.a 00:01:55.828 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.828 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.828 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.828 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.828 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.828 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.828 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.828 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.828 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.828 [196/268] Linking static target 
drivers/librte_bus_vdev.a 00:01:55.828 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.828 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.828 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.828 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.087 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.087 [202/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.087 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.087 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.087 [205/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.087 [206/268] Linking static target lib/librte_hash.a 00:01:56.087 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:56.087 [208/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.087 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.087 [210/268] Linking static target lib/librte_cryptodev.a 00:01:56.087 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.087 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.087 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.087 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:56.087 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.087 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.345 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.345 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.345 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.345 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.345 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.602 [222/268] Linking static target lib/librte_ethdev.a 00:01:56.602 [223/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.602 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.602 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.859 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.859 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.788 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.788 [229/268] Linking static target lib/librte_vhost.a 00:01:57.788 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.683 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.943 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:04.943 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.943 [234/268] Linking target lib/librte_eal.so.24.1 00:02:04.943 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.943 [236/268] Linking target lib/librte_ring.so.24.1 00:02:04.943 [237/268] Linking target lib/librte_pci.so.24.1 00:02:04.943 [238/268] Linking target lib/librte_meter.so.24.1 00:02:04.943 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:04.943 [240/268] Linking target lib/librte_timer.so.24.1 00:02:04.943 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.201 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.201 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.201 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.201 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.201 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.201 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:05.201 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.201 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:05.460 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.460 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.460 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.460 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:05.460 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.460 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:05.460 [256/268] Linking target lib/librte_net.so.24.1 00:02:05.460 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:05.460 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:05.717 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.717 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.717 [261/268] Linking target lib/librte_hash.so.24.1 00:02:05.717 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:05.717 [263/268] Linking target lib/librte_security.so.24.1 00:02:05.717 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:05.717 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.000 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.000 [267/268] Linking target lib/librte_power.so.24.1 00:02:06.000 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:06.000 INFO: autodetecting backend as ninja 00:02:06.000 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:06.970 CC lib/ut/ut.o 00:02:06.970 CC lib/log/log.o 00:02:06.970 CC lib/log/log_flags.o 00:02:06.970 CC lib/log/log_deprecated.o 00:02:06.970 CC lib/ut_mock/mock.o 00:02:06.970 LIB libspdk_ut.a 00:02:06.970 SO libspdk_ut.so.2.0 00:02:06.970 LIB libspdk_log.a 00:02:06.970 LIB libspdk_ut_mock.a 00:02:06.970 SO libspdk_log.so.7.0 00:02:06.970 SO libspdk_ut_mock.so.6.0 00:02:06.970 SYMLINK libspdk_ut.so 00:02:07.227 SYMLINK libspdk_log.so 00:02:07.227 SYMLINK libspdk_ut_mock.so 00:02:07.484 CC 
lib/ioat/ioat.o 00:02:07.484 CXX lib/trace_parser/trace.o 00:02:07.484 CC lib/dma/dma.o 00:02:07.484 CC lib/util/base64.o 00:02:07.484 CC lib/util/bit_array.o 00:02:07.484 CC lib/util/cpuset.o 00:02:07.484 CC lib/util/crc32.o 00:02:07.484 CC lib/util/crc16.o 00:02:07.484 CC lib/util/crc32c.o 00:02:07.484 CC lib/util/crc32_ieee.o 00:02:07.484 CC lib/util/crc64.o 00:02:07.484 CC lib/util/dif.o 00:02:07.484 CC lib/util/fd.o 00:02:07.484 CC lib/util/file.o 00:02:07.484 CC lib/util/hexlify.o 00:02:07.484 CC lib/util/iov.o 00:02:07.484 CC lib/util/pipe.o 00:02:07.484 CC lib/util/math.o 00:02:07.484 CC lib/util/strerror_tls.o 00:02:07.484 CC lib/util/uuid.o 00:02:07.484 CC lib/util/string.o 00:02:07.484 CC lib/util/fd_group.o 00:02:07.484 CC lib/util/xor.o 00:02:07.484 CC lib/util/zipf.o 00:02:07.484 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.484 CC lib/vfio_user/host/vfio_user.o 00:02:07.484 LIB libspdk_dma.a 00:02:07.484 SO libspdk_dma.so.4.0 00:02:07.484 LIB libspdk_ioat.a 00:02:07.742 SO libspdk_ioat.so.7.0 00:02:07.742 SYMLINK libspdk_dma.so 00:02:07.742 SYMLINK libspdk_ioat.so 00:02:07.742 LIB libspdk_vfio_user.a 00:02:07.742 SO libspdk_vfio_user.so.5.0 00:02:07.742 LIB libspdk_util.a 00:02:07.742 SYMLINK libspdk_vfio_user.so 00:02:08.000 SO libspdk_util.so.9.0 00:02:08.000 SYMLINK libspdk_util.so 00:02:08.000 LIB libspdk_trace_parser.a 00:02:08.000 SO libspdk_trace_parser.so.5.0 00:02:08.258 SYMLINK libspdk_trace_parser.so 00:02:08.258 CC lib/idxd/idxd.o 00:02:08.258 CC lib/idxd/idxd_user.o 00:02:08.258 CC lib/idxd/idxd_kernel.o 00:02:08.258 CC lib/vmd/vmd.o 00:02:08.258 CC lib/vmd/led.o 00:02:08.258 CC lib/rdma/common.o 00:02:08.258 CC lib/json/json_util.o 00:02:08.258 CC lib/rdma/rdma_verbs.o 00:02:08.258 CC lib/json/json_parse.o 00:02:08.258 CC lib/json/json_write.o 00:02:08.258 CC lib/env_dpdk/env.o 00:02:08.258 CC lib/conf/conf.o 00:02:08.258 CC lib/env_dpdk/memory.o 00:02:08.258 CC lib/env_dpdk/pci.o 00:02:08.258 CC lib/env_dpdk/init.o 00:02:08.258 CC lib/env_dpdk/threads.o 00:02:08.258 CC lib/env_dpdk/pci_ioat.o 00:02:08.258 CC lib/env_dpdk/pci_virtio.o 00:02:08.258 CC lib/env_dpdk/pci_vmd.o 00:02:08.258 CC lib/env_dpdk/pci_idxd.o 00:02:08.258 CC lib/env_dpdk/pci_event.o 00:02:08.258 CC lib/env_dpdk/sigbus_handler.o 00:02:08.258 CC lib/env_dpdk/pci_dpdk.o 00:02:08.258 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.258 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.517 LIB libspdk_conf.a 00:02:08.517 LIB libspdk_json.a 00:02:08.517 LIB libspdk_rdma.a 00:02:08.517 SO libspdk_conf.so.6.0 00:02:08.517 SO libspdk_json.so.6.0 00:02:08.517 SO libspdk_rdma.so.6.0 00:02:08.517 SYMLINK libspdk_conf.so 00:02:08.775 SYMLINK libspdk_json.so 00:02:08.775 SYMLINK libspdk_rdma.so 00:02:08.775 LIB libspdk_idxd.a 00:02:08.775 SO libspdk_idxd.so.12.0 00:02:08.775 LIB libspdk_vmd.a 00:02:08.775 SYMLINK libspdk_idxd.so 00:02:08.775 SO libspdk_vmd.so.6.0 00:02:08.775 SYMLINK libspdk_vmd.so 00:02:09.033 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.034 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.034 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.034 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.034 LIB libspdk_jsonrpc.a 00:02:09.291 SO libspdk_jsonrpc.so.6.0 00:02:09.292 SYMLINK libspdk_jsonrpc.so 00:02:09.292 LIB libspdk_env_dpdk.a 00:02:09.292 SO libspdk_env_dpdk.so.14.1 00:02:09.550 SYMLINK libspdk_env_dpdk.so 00:02:09.550 CC lib/rpc/rpc.o 00:02:09.808 LIB libspdk_rpc.a 00:02:09.808 SO libspdk_rpc.so.6.0 00:02:09.808 SYMLINK libspdk_rpc.so 00:02:10.065 CC lib/trace/trace.o 00:02:10.065 CC lib/trace/trace_flags.o 
00:02:10.065 CC lib/trace/trace_rpc.o 00:02:10.065 CC lib/notify/notify.o 00:02:10.065 CC lib/notify/notify_rpc.o 00:02:10.065 CC lib/keyring/keyring.o 00:02:10.065 CC lib/keyring/keyring_rpc.o 00:02:10.323 LIB libspdk_notify.a 00:02:10.323 SO libspdk_notify.so.6.0 00:02:10.323 LIB libspdk_keyring.a 00:02:10.323 LIB libspdk_trace.a 00:02:10.323 SO libspdk_keyring.so.1.0 00:02:10.323 SYMLINK libspdk_notify.so 00:02:10.323 SO libspdk_trace.so.10.0 00:02:10.323 SYMLINK libspdk_keyring.so 00:02:10.323 SYMLINK libspdk_trace.so 00:02:10.894 CC lib/sock/sock.o 00:02:10.894 CC lib/sock/sock_rpc.o 00:02:10.894 CC lib/thread/thread.o 00:02:10.894 CC lib/thread/iobuf.o 00:02:10.894 LIB libspdk_sock.a 00:02:10.894 SO libspdk_sock.so.9.0 00:02:11.154 SYMLINK libspdk_sock.so 00:02:11.411 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.411 CC lib/nvme/nvme_ctrlr.o 00:02:11.411 CC lib/nvme/nvme_fabric.o 00:02:11.411 CC lib/nvme/nvme_ns_cmd.o 00:02:11.411 CC lib/nvme/nvme_pcie_common.o 00:02:11.411 CC lib/nvme/nvme_ns.o 00:02:11.411 CC lib/nvme/nvme_pcie.o 00:02:11.411 CC lib/nvme/nvme_qpair.o 00:02:11.411 CC lib/nvme/nvme.o 00:02:11.411 CC lib/nvme/nvme_quirks.o 00:02:11.411 CC lib/nvme/nvme_transport.o 00:02:11.411 CC lib/nvme/nvme_discovery.o 00:02:11.411 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.411 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.411 CC lib/nvme/nvme_tcp.o 00:02:11.411 CC lib/nvme/nvme_opal.o 00:02:11.411 CC lib/nvme/nvme_io_msg.o 00:02:11.411 CC lib/nvme/nvme_poll_group.o 00:02:11.411 CC lib/nvme/nvme_zns.o 00:02:11.411 CC lib/nvme/nvme_stubs.o 00:02:11.411 CC lib/nvme/nvme_auth.o 00:02:11.411 CC lib/nvme/nvme_rdma.o 00:02:11.411 CC lib/nvme/nvme_cuse.o 00:02:11.669 LIB libspdk_thread.a 00:02:11.669 SO libspdk_thread.so.10.0 00:02:11.926 SYMLINK libspdk_thread.so 00:02:12.184 CC lib/accel/accel.o 00:02:12.184 CC lib/virtio/virtio.o 00:02:12.184 CC lib/accel/accel_sw.o 00:02:12.184 CC lib/accel/accel_rpc.o 00:02:12.184 CC lib/virtio/virtio_vhost_user.o 00:02:12.184 CC lib/virtio/virtio_vfio_user.o 00:02:12.184 CC lib/virtio/virtio_pci.o 00:02:12.184 CC lib/blob/blobstore.o 00:02:12.184 CC lib/blob/request.o 00:02:12.184 CC lib/blob/blob_bs_dev.o 00:02:12.184 CC lib/init/json_config.o 00:02:12.184 CC lib/blob/zeroes.o 00:02:12.184 CC lib/init/subsystem.o 00:02:12.184 CC lib/init/subsystem_rpc.o 00:02:12.184 CC lib/init/rpc.o 00:02:12.442 LIB libspdk_init.a 00:02:12.442 LIB libspdk_virtio.a 00:02:12.442 SO libspdk_init.so.5.0 00:02:12.442 SO libspdk_virtio.so.7.0 00:02:12.442 SYMLINK libspdk_init.so 00:02:12.442 SYMLINK libspdk_virtio.so 00:02:12.701 CC lib/event/app.o 00:02:12.701 CC lib/event/reactor.o 00:02:12.701 CC lib/event/log_rpc.o 00:02:12.701 CC lib/event/scheduler_static.o 00:02:12.701 CC lib/event/app_rpc.o 00:02:12.701 LIB libspdk_accel.a 00:02:12.958 SO libspdk_accel.so.15.0 00:02:12.958 LIB libspdk_nvme.a 00:02:12.958 SYMLINK libspdk_accel.so 00:02:12.958 SO libspdk_nvme.so.13.0 00:02:12.958 LIB libspdk_event.a 00:02:12.958 SO libspdk_event.so.13.1 00:02:13.216 SYMLINK libspdk_event.so 00:02:13.216 CC lib/bdev/bdev.o 00:02:13.216 CC lib/bdev/bdev_rpc.o 00:02:13.216 CC lib/bdev/bdev_zone.o 00:02:13.216 CC lib/bdev/scsi_nvme.o 00:02:13.216 CC lib/bdev/part.o 00:02:13.216 SYMLINK libspdk_nvme.so 00:02:14.150 LIB libspdk_blob.a 00:02:14.150 SO libspdk_blob.so.11.0 00:02:14.150 SYMLINK libspdk_blob.so 00:02:14.407 CC lib/blobfs/blobfs.o 00:02:14.407 CC lib/blobfs/tree.o 00:02:14.665 CC lib/lvol/lvol.o 00:02:14.923 LIB libspdk_bdev.a 00:02:14.923 SO libspdk_bdev.so.15.0 00:02:15.180 LIB 
libspdk_blobfs.a 00:02:15.180 SYMLINK libspdk_bdev.so 00:02:15.180 SO libspdk_blobfs.so.10.0 00:02:15.180 SYMLINK libspdk_blobfs.so 00:02:15.180 LIB libspdk_lvol.a 00:02:15.180 SO libspdk_lvol.so.10.0 00:02:15.180 SYMLINK libspdk_lvol.so 00:02:15.438 CC lib/nvmf/ctrlr.o 00:02:15.438 CC lib/nvmf/ctrlr_discovery.o 00:02:15.438 CC lib/nvmf/ctrlr_bdev.o 00:02:15.438 CC lib/scsi/lun.o 00:02:15.438 CC lib/nvmf/subsystem.o 00:02:15.438 CC lib/scsi/dev.o 00:02:15.438 CC lib/nvmf/nvmf.o 00:02:15.438 CC lib/nvmf/transport.o 00:02:15.438 CC lib/nvmf/nvmf_rpc.o 00:02:15.438 CC lib/nvmf/tcp.o 00:02:15.438 CC lib/scsi/port.o 00:02:15.438 CC lib/nvmf/stubs.o 00:02:15.438 CC lib/nvmf/mdns_server.o 00:02:15.438 CC lib/scsi/scsi.o 00:02:15.438 CC lib/scsi/scsi_bdev.o 00:02:15.438 CC lib/nvmf/rdma.o 00:02:15.438 CC lib/scsi/scsi_pr.o 00:02:15.438 CC lib/nvmf/auth.o 00:02:15.438 CC lib/scsi/task.o 00:02:15.438 CC lib/scsi/scsi_rpc.o 00:02:15.438 CC lib/nbd/nbd.o 00:02:15.438 CC lib/nbd/nbd_rpc.o 00:02:15.438 CC lib/ublk/ublk.o 00:02:15.438 CC lib/ublk/ublk_rpc.o 00:02:15.438 CC lib/ftl/ftl_init.o 00:02:15.438 CC lib/ftl/ftl_core.o 00:02:15.438 CC lib/ftl/ftl_layout.o 00:02:15.438 CC lib/ftl/ftl_debug.o 00:02:15.438 CC lib/ftl/ftl_io.o 00:02:15.438 CC lib/ftl/ftl_sb.o 00:02:15.438 CC lib/ftl/ftl_nv_cache.o 00:02:15.438 CC lib/ftl/ftl_l2p.o 00:02:15.438 CC lib/ftl/ftl_l2p_flat.o 00:02:15.438 CC lib/ftl/ftl_band.o 00:02:15.438 CC lib/ftl/ftl_rq.o 00:02:15.438 CC lib/ftl/ftl_band_ops.o 00:02:15.438 CC lib/ftl/ftl_writer.o 00:02:15.438 CC lib/ftl/ftl_reloc.o 00:02:15.438 CC lib/ftl/ftl_l2p_cache.o 00:02:15.438 CC lib/ftl/ftl_p2l.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.438 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.438 CC lib/ftl/utils/ftl_conf.o 00:02:15.438 CC lib/ftl/utils/ftl_md.o 00:02:15.438 CC lib/ftl/utils/ftl_mempool.o 00:02:15.438 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.438 CC lib/ftl/utils/ftl_property.o 00:02:15.438 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.438 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.438 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.438 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.438 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.438 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.438 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.439 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.439 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.439 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.439 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.439 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.439 CC lib/ftl/base/ftl_base_dev.o 00:02:15.439 CC lib/ftl/ftl_trace.o 00:02:16.004 LIB libspdk_scsi.a 00:02:16.004 SO libspdk_scsi.so.9.0 00:02:16.004 LIB libspdk_nbd.a 00:02:16.004 SO libspdk_nbd.so.7.0 00:02:16.004 SYMLINK libspdk_scsi.so 00:02:16.004 SYMLINK libspdk_nbd.so 00:02:16.263 LIB libspdk_ublk.a 00:02:16.263 SO libspdk_ublk.so.3.0 00:02:16.263 SYMLINK libspdk_ublk.so 00:02:16.263 CC lib/iscsi/conn.o 00:02:16.263 CC lib/iscsi/init_grp.o 00:02:16.263 CC lib/iscsi/iscsi.o 00:02:16.263 CC lib/iscsi/md5.o 
00:02:16.263 CC lib/iscsi/param.o 00:02:16.263 CC lib/iscsi/portal_grp.o 00:02:16.263 CC lib/iscsi/tgt_node.o 00:02:16.263 CC lib/iscsi/iscsi_subsystem.o 00:02:16.263 CC lib/iscsi/iscsi_rpc.o 00:02:16.263 CC lib/iscsi/task.o 00:02:16.263 CC lib/vhost/vhost_scsi.o 00:02:16.263 CC lib/vhost/vhost.o 00:02:16.263 CC lib/vhost/vhost_rpc.o 00:02:16.263 CC lib/vhost/vhost_blk.o 00:02:16.263 CC lib/vhost/rte_vhost_user.o 00:02:16.522 LIB libspdk_ftl.a 00:02:16.522 SO libspdk_ftl.so.9.0 00:02:16.781 SYMLINK libspdk_ftl.so 00:02:17.039 LIB libspdk_nvmf.a 00:02:17.039 SO libspdk_nvmf.so.18.1 00:02:17.039 LIB libspdk_vhost.a 00:02:17.039 SO libspdk_vhost.so.8.0 00:02:17.300 SYMLINK libspdk_vhost.so 00:02:17.300 SYMLINK libspdk_nvmf.so 00:02:17.300 LIB libspdk_iscsi.a 00:02:17.300 SO libspdk_iscsi.so.8.0 00:02:17.560 SYMLINK libspdk_iscsi.so 00:02:17.818 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.074 LIB libspdk_env_dpdk_rpc.a 00:02:18.074 CC module/blob/bdev/blob_bdev.o 00:02:18.074 CC module/accel/ioat/accel_ioat.o 00:02:18.074 CC module/accel/error/accel_error.o 00:02:18.074 CC module/accel/error/accel_error_rpc.o 00:02:18.074 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.074 CC module/accel/iaa/accel_iaa.o 00:02:18.074 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.074 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.074 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.074 CC module/accel/dsa/accel_dsa.o 00:02:18.074 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.074 CC module/sock/posix/posix.o 00:02:18.074 CC module/keyring/file/keyring.o 00:02:18.074 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.074 CC module/keyring/file/keyring_rpc.o 00:02:18.074 CC module/keyring/linux/keyring_rpc.o 00:02:18.074 CC module/keyring/linux/keyring.o 00:02:18.074 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.074 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.074 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.333 LIB libspdk_keyring_file.a 00:02:18.333 LIB libspdk_accel_ioat.a 00:02:18.333 LIB libspdk_scheduler_gscheduler.a 00:02:18.333 LIB libspdk_keyring_linux.a 00:02:18.333 LIB libspdk_accel_error.a 00:02:18.333 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.333 LIB libspdk_accel_iaa.a 00:02:18.333 LIB libspdk_scheduler_dynamic.a 00:02:18.333 SO libspdk_accel_error.so.2.0 00:02:18.333 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.333 SO libspdk_keyring_file.so.1.0 00:02:18.333 SO libspdk_accel_ioat.so.6.0 00:02:18.333 SO libspdk_keyring_linux.so.1.0 00:02:18.333 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.333 SO libspdk_accel_iaa.so.3.0 00:02:18.333 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.333 LIB libspdk_accel_dsa.a 00:02:18.333 LIB libspdk_blob_bdev.a 00:02:18.333 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.333 SYMLINK libspdk_keyring_linux.so 00:02:18.333 SYMLINK libspdk_accel_error.so 00:02:18.333 SYMLINK libspdk_keyring_file.so 00:02:18.333 SYMLINK libspdk_accel_ioat.so 00:02:18.333 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.333 SO libspdk_blob_bdev.so.11.0 00:02:18.333 SO libspdk_accel_dsa.so.5.0 00:02:18.333 SYMLINK libspdk_accel_iaa.so 00:02:18.333 SYMLINK libspdk_blob_bdev.so 00:02:18.333 SYMLINK libspdk_accel_dsa.so 00:02:18.591 LIB libspdk_sock_posix.a 00:02:18.591 SO libspdk_sock_posix.so.6.0 00:02:18.849 SYMLINK libspdk_sock_posix.so 00:02:18.849 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.849 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.849 CC module/bdev/raid/bdev_raid.o 00:02:18.849 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.849 CC module/bdev/null/bdev_null.o 
00:02:18.849 CC module/bdev/null/bdev_null_rpc.o 00:02:18.849 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.849 CC module/bdev/raid/raid1.o 00:02:18.849 CC module/bdev/raid/raid0.o 00:02:18.849 CC module/bdev/raid/concat.o 00:02:18.849 CC module/bdev/gpt/gpt.o 00:02:18.849 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.849 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.849 CC module/bdev/split/vbdev_split.o 00:02:18.849 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.849 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.849 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.849 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.849 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.849 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.849 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.849 CC module/bdev/delay/vbdev_delay.o 00:02:18.849 CC module/bdev/malloc/bdev_malloc.o 00:02:18.849 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.849 CC module/bdev/error/vbdev_error.o 00:02:18.849 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.849 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.849 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.849 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.849 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.849 CC module/bdev/nvme/bdev_nvme.o 00:02:18.849 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.849 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.849 CC module/bdev/nvme/nvme_rpc.o 00:02:18.849 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.849 CC module/bdev/ftl/bdev_ftl.o 00:02:18.849 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.849 CC module/bdev/nvme/vbdev_opal.o 00:02:18.849 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.849 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.849 CC module/bdev/aio/bdev_aio.o 00:02:18.849 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.107 LIB libspdk_blobfs_bdev.a 00:02:19.107 LIB libspdk_bdev_split.a 00:02:19.107 SO libspdk_blobfs_bdev.so.6.0 00:02:19.107 SO libspdk_bdev_split.so.6.0 00:02:19.107 LIB libspdk_bdev_null.a 00:02:19.107 LIB libspdk_bdev_gpt.a 00:02:19.107 SO libspdk_bdev_null.so.6.0 00:02:19.107 SO libspdk_bdev_gpt.so.6.0 00:02:19.107 LIB libspdk_bdev_ftl.a 00:02:19.107 SYMLINK libspdk_blobfs_bdev.so 00:02:19.107 LIB libspdk_bdev_error.a 00:02:19.107 LIB libspdk_bdev_passthru.a 00:02:19.107 SYMLINK libspdk_bdev_split.so 00:02:19.107 SO libspdk_bdev_passthru.so.6.0 00:02:19.107 SO libspdk_bdev_error.so.6.0 00:02:19.107 SO libspdk_bdev_ftl.so.6.0 00:02:19.107 LIB libspdk_bdev_zone_block.a 00:02:19.107 LIB libspdk_bdev_malloc.a 00:02:19.107 LIB libspdk_bdev_aio.a 00:02:19.107 LIB libspdk_bdev_iscsi.a 00:02:19.107 SYMLINK libspdk_bdev_gpt.so 00:02:19.107 SYMLINK libspdk_bdev_null.so 00:02:19.107 LIB libspdk_bdev_delay.a 00:02:19.107 SO libspdk_bdev_malloc.so.6.0 00:02:19.107 SO libspdk_bdev_zone_block.so.6.0 00:02:19.107 SO libspdk_bdev_aio.so.6.0 00:02:19.365 SO libspdk_bdev_iscsi.so.6.0 00:02:19.365 SYMLINK libspdk_bdev_error.so 00:02:19.365 SYMLINK libspdk_bdev_passthru.so 00:02:19.365 SYMLINK libspdk_bdev_ftl.so 00:02:19.365 SO libspdk_bdev_delay.so.6.0 00:02:19.365 SYMLINK libspdk_bdev_aio.so 00:02:19.365 SYMLINK libspdk_bdev_zone_block.so 00:02:19.365 SYMLINK libspdk_bdev_malloc.so 00:02:19.365 SYMLINK libspdk_bdev_iscsi.so 00:02:19.365 LIB libspdk_bdev_lvol.a 00:02:19.365 LIB libspdk_bdev_virtio.a 00:02:19.365 SYMLINK libspdk_bdev_delay.so 00:02:19.365 SO libspdk_bdev_lvol.so.6.0 00:02:19.365 SO libspdk_bdev_virtio.so.6.0 00:02:19.365 SYMLINK libspdk_bdev_lvol.so 00:02:19.365 SYMLINK libspdk_bdev_virtio.so 
00:02:19.624 LIB libspdk_bdev_raid.a 00:02:19.624 SO libspdk_bdev_raid.so.6.0 00:02:19.624 SYMLINK libspdk_bdev_raid.so 00:02:20.558 LIB libspdk_bdev_nvme.a 00:02:20.558 SO libspdk_bdev_nvme.so.7.0 00:02:20.558 SYMLINK libspdk_bdev_nvme.so 00:02:21.123 CC module/event/subsystems/vmd/vmd.o 00:02:21.123 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.123 CC module/event/subsystems/sock/sock.o 00:02:21.123 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.123 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.123 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.123 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.123 CC module/event/subsystems/keyring/keyring.o 00:02:21.123 LIB libspdk_event_sock.a 00:02:21.123 LIB libspdk_event_vmd.a 00:02:21.123 LIB libspdk_event_vhost_blk.a 00:02:21.123 LIB libspdk_event_scheduler.a 00:02:21.382 LIB libspdk_event_keyring.a 00:02:21.382 LIB libspdk_event_iobuf.a 00:02:21.382 SO libspdk_event_sock.so.5.0 00:02:21.382 SO libspdk_event_vmd.so.6.0 00:02:21.382 SO libspdk_event_vhost_blk.so.3.0 00:02:21.382 SO libspdk_event_scheduler.so.4.0 00:02:21.382 SO libspdk_event_keyring.so.1.0 00:02:21.382 SO libspdk_event_iobuf.so.3.0 00:02:21.382 SYMLINK libspdk_event_sock.so 00:02:21.382 SYMLINK libspdk_event_vmd.so 00:02:21.382 SYMLINK libspdk_event_vhost_blk.so 00:02:21.382 SYMLINK libspdk_event_keyring.so 00:02:21.382 SYMLINK libspdk_event_scheduler.so 00:02:21.382 SYMLINK libspdk_event_iobuf.so 00:02:21.640 CC module/event/subsystems/accel/accel.o 00:02:21.640 LIB libspdk_event_accel.a 00:02:21.898 SO libspdk_event_accel.so.6.0 00:02:21.898 SYMLINK libspdk_event_accel.so 00:02:22.156 CC module/event/subsystems/bdev/bdev.o 00:02:22.156 LIB libspdk_event_bdev.a 00:02:22.414 SO libspdk_event_bdev.so.6.0 00:02:22.414 SYMLINK libspdk_event_bdev.so 00:02:22.671 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.671 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.671 CC module/event/subsystems/scsi/scsi.o 00:02:22.671 CC module/event/subsystems/ublk/ublk.o 00:02:22.671 CC module/event/subsystems/nbd/nbd.o 00:02:22.671 LIB libspdk_event_scsi.a 00:02:22.671 LIB libspdk_event_ublk.a 00:02:22.961 LIB libspdk_event_nbd.a 00:02:22.961 SO libspdk_event_scsi.so.6.0 00:02:22.961 LIB libspdk_event_nvmf.a 00:02:22.961 SO libspdk_event_ublk.so.3.0 00:02:22.961 SO libspdk_event_nbd.so.6.0 00:02:22.961 SO libspdk_event_nvmf.so.6.0 00:02:22.961 SYMLINK libspdk_event_scsi.so 00:02:22.961 SYMLINK libspdk_event_ublk.so 00:02:22.961 SYMLINK libspdk_event_nbd.so 00:02:22.961 SYMLINK libspdk_event_nvmf.so 00:02:23.220 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.220 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.220 LIB libspdk_event_iscsi.a 00:02:23.220 LIB libspdk_event_vhost_scsi.a 00:02:23.220 SO libspdk_event_iscsi.so.6.0 00:02:23.478 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.479 SYMLINK libspdk_event_iscsi.so 00:02:23.479 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.479 SO libspdk.so.6.0 00:02:23.479 SYMLINK libspdk.so 00:02:23.736 CXX app/trace/trace.o 00:02:23.736 CC app/trace_record/trace_record.o 00:02:23.736 CC app/spdk_lspci/spdk_lspci.o 00:02:23.736 CC app/spdk_nvme_perf/perf.o 00:02:23.736 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.736 CC app/spdk_nvme_identify/identify.o 00:02:24.007 TEST_HEADER include/spdk/accel.h 00:02:24.007 TEST_HEADER include/spdk/accel_module.h 00:02:24.007 TEST_HEADER include/spdk/barrier.h 00:02:24.007 CC app/spdk_top/spdk_top.o 00:02:24.007 TEST_HEADER include/spdk/base64.h 00:02:24.007 
TEST_HEADER include/spdk/assert.h 00:02:24.007 TEST_HEADER include/spdk/bdev.h 00:02:24.007 TEST_HEADER include/spdk/bdev_module.h 00:02:24.007 TEST_HEADER include/spdk/bit_pool.h 00:02:24.007 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.007 TEST_HEADER include/spdk/bit_array.h 00:02:24.007 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.007 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.007 TEST_HEADER include/spdk/blobfs.h 00:02:24.007 TEST_HEADER include/spdk/blob.h 00:02:24.007 TEST_HEADER include/spdk/config.h 00:02:24.007 TEST_HEADER include/spdk/conf.h 00:02:24.007 TEST_HEADER include/spdk/cpuset.h 00:02:24.007 TEST_HEADER include/spdk/crc16.h 00:02:24.007 CC test/rpc_client/rpc_client_test.o 00:02:24.007 TEST_HEADER include/spdk/crc32.h 00:02:24.007 TEST_HEADER include/spdk/crc64.h 00:02:24.007 TEST_HEADER include/spdk/dif.h 00:02:24.007 TEST_HEADER include/spdk/dma.h 00:02:24.007 TEST_HEADER include/spdk/endian.h 00:02:24.007 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.007 TEST_HEADER include/spdk/env.h 00:02:24.007 TEST_HEADER include/spdk/event.h 00:02:24.007 TEST_HEADER include/spdk/fd.h 00:02:24.007 TEST_HEADER include/spdk/fd_group.h 00:02:24.007 TEST_HEADER include/spdk/file.h 00:02:24.007 TEST_HEADER include/spdk/ftl.h 00:02:24.007 TEST_HEADER include/spdk/hexlify.h 00:02:24.007 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.007 TEST_HEADER include/spdk/idxd.h 00:02:24.007 TEST_HEADER include/spdk/histogram_data.h 00:02:24.007 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.007 TEST_HEADER include/spdk/ioat.h 00:02:24.007 TEST_HEADER include/spdk/init.h 00:02:24.007 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.007 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.007 TEST_HEADER include/spdk/json.h 00:02:24.007 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.007 TEST_HEADER include/spdk/keyring_module.h 00:02:24.007 TEST_HEADER include/spdk/log.h 00:02:24.007 TEST_HEADER include/spdk/likely.h 00:02:24.007 TEST_HEADER include/spdk/keyring.h 00:02:24.007 TEST_HEADER include/spdk/lvol.h 00:02:24.007 TEST_HEADER include/spdk/memory.h 00:02:24.007 TEST_HEADER include/spdk/mmio.h 00:02:24.007 CC app/spdk_dd/spdk_dd.o 00:02:24.007 TEST_HEADER include/spdk/nbd.h 00:02:24.007 CC app/nvmf_tgt/nvmf_main.o 00:02:24.007 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.007 TEST_HEADER include/spdk/notify.h 00:02:24.007 TEST_HEADER include/spdk/nvme.h 00:02:24.007 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.007 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.007 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.007 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.007 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.007 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.007 TEST_HEADER include/spdk/nvmf.h 00:02:24.007 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.007 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.007 TEST_HEADER include/spdk/opal.h 00:02:24.007 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.007 TEST_HEADER include/spdk/opal_spec.h 00:02:24.007 TEST_HEADER include/spdk/pci_ids.h 00:02:24.007 TEST_HEADER include/spdk/pipe.h 00:02:24.007 TEST_HEADER include/spdk/queue.h 00:02:24.007 TEST_HEADER include/spdk/reduce.h 00:02:24.007 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.007 CC app/vhost/vhost.o 00:02:24.007 TEST_HEADER include/spdk/scheduler.h 00:02:24.007 TEST_HEADER include/spdk/rpc.h 00:02:24.007 TEST_HEADER include/spdk/scsi.h 00:02:24.007 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.007 TEST_HEADER include/spdk/sock.h 00:02:24.007 TEST_HEADER include/spdk/stdinc.h 00:02:24.007 
TEST_HEADER include/spdk/string.h 00:02:24.007 TEST_HEADER include/spdk/trace.h 00:02:24.007 TEST_HEADER include/spdk/thread.h 00:02:24.007 TEST_HEADER include/spdk/trace_parser.h 00:02:24.007 TEST_HEADER include/spdk/tree.h 00:02:24.007 TEST_HEADER include/spdk/ublk.h 00:02:24.007 TEST_HEADER include/spdk/util.h 00:02:24.007 TEST_HEADER include/spdk/uuid.h 00:02:24.007 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.007 TEST_HEADER include/spdk/version.h 00:02:24.007 TEST_HEADER include/spdk/vhost.h 00:02:24.007 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.007 CC app/spdk_tgt/spdk_tgt.o 00:02:24.007 TEST_HEADER include/spdk/vmd.h 00:02:24.007 TEST_HEADER include/spdk/zipf.h 00:02:24.007 TEST_HEADER include/spdk/xor.h 00:02:24.007 CXX test/cpp_headers/accel_module.o 00:02:24.007 CXX test/cpp_headers/accel.o 00:02:24.007 CXX test/cpp_headers/assert.o 00:02:24.007 CXX test/cpp_headers/barrier.o 00:02:24.007 CXX test/cpp_headers/base64.o 00:02:24.007 CXX test/cpp_headers/bdev.o 00:02:24.007 CXX test/cpp_headers/bdev_module.o 00:02:24.007 CXX test/cpp_headers/bdev_zone.o 00:02:24.007 CXX test/cpp_headers/bit_array.o 00:02:24.007 CXX test/cpp_headers/bit_pool.o 00:02:24.007 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.007 CXX test/cpp_headers/blob_bdev.o 00:02:24.007 CXX test/cpp_headers/blob.o 00:02:24.007 CXX test/cpp_headers/blobfs.o 00:02:24.007 CXX test/cpp_headers/conf.o 00:02:24.007 CXX test/cpp_headers/config.o 00:02:24.007 CXX test/cpp_headers/cpuset.o 00:02:24.007 CXX test/cpp_headers/crc16.o 00:02:24.007 CXX test/cpp_headers/crc32.o 00:02:24.007 CXX test/cpp_headers/dif.o 00:02:24.007 CXX test/cpp_headers/crc64.o 00:02:24.007 CC examples/nvme/reconnect/reconnect.o 00:02:24.007 CC examples/nvme/arbitration/arbitration.o 00:02:24.007 CC examples/nvme/hello_world/hello_world.o 00:02:24.008 CC examples/nvme/abort/abort.o 00:02:24.008 CXX test/cpp_headers/dma.o 00:02:24.008 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:24.008 CC test/event/event_perf/event_perf.o 00:02:24.008 CC test/app/histogram_perf/histogram_perf.o 00:02:24.008 CC examples/vmd/led/led.o 00:02:24.008 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.008 CC app/fio/nvme/fio_plugin.o 00:02:24.008 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:24.008 CC examples/ioat/perf/perf.o 00:02:24.008 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.008 CC test/env/pci/pci_ut.o 00:02:24.008 CC examples/util/zipf/zipf.o 00:02:24.008 CC examples/ioat/verify/verify.o 00:02:24.008 CC test/nvme/fused_ordering/fused_ordering.o 00:02:24.008 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:24.008 CC test/nvme/fdp/fdp.o 00:02:24.008 CC test/nvme/connect_stress/connect_stress.o 00:02:24.008 CC test/nvme/boot_partition/boot_partition.o 00:02:24.008 CC test/nvme/aer/aer.o 00:02:24.008 CC test/env/vtophys/vtophys.o 00:02:24.008 CC test/event/reactor/reactor.o 00:02:24.277 CC test/nvme/compliance/nvme_compliance.o 00:02:24.277 CC examples/idxd/perf/perf.o 00:02:24.277 CC test/nvme/cuse/cuse.o 00:02:24.277 CC test/app/jsoncat/jsoncat.o 00:02:24.277 CC test/env/memory/memory_ut.o 00:02:24.277 CC test/nvme/err_injection/err_injection.o 00:02:24.277 CC examples/bdev/hello_world/hello_bdev.o 00:02:24.277 CC test/nvme/e2edp/nvme_dp.o 00:02:24.277 CC test/nvme/overhead/overhead.o 00:02:24.277 CC test/nvme/sgl/sgl.o 00:02:24.277 CC test/nvme/simple_copy/simple_copy.o 00:02:24.277 CC test/accel/dif/dif.o 00:02:24.277 CC test/nvme/reset/reset.o 00:02:24.277 CC test/app/stub/stub.o 00:02:24.277 CC test/nvme/startup/startup.o 00:02:24.277 CC 
examples/bdev/bdevperf/bdevperf.o 00:02:24.277 CC examples/nvme/hotplug/hotplug.o 00:02:24.277 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:24.277 CC test/thread/poller_perf/poller_perf.o 00:02:24.277 CC test/event/app_repeat/app_repeat.o 00:02:24.277 CC examples/sock/hello_world/hello_sock.o 00:02:24.277 CC examples/accel/perf/accel_perf.o 00:02:24.277 CC test/nvme/reserve/reserve.o 00:02:24.277 CC examples/thread/thread/thread_ex.o 00:02:24.277 CC test/event/scheduler/scheduler.o 00:02:24.277 CC test/event/reactor_perf/reactor_perf.o 00:02:24.277 CC test/bdev/bdevio/bdevio.o 00:02:24.277 CC app/fio/bdev/fio_plugin.o 00:02:24.277 CC test/app/bdev_svc/bdev_svc.o 00:02:24.277 CC examples/blob/hello_world/hello_blob.o 00:02:24.277 CC examples/blob/cli/blobcli.o 00:02:24.277 CC examples/nvmf/nvmf/nvmf.o 00:02:24.277 CC test/blobfs/mkfs/mkfs.o 00:02:24.277 CC test/dma/test_dma/test_dma.o 00:02:24.277 LINK spdk_lspci 00:02:24.277 LINK rpc_client_test 00:02:24.277 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.277 LINK interrupt_tgt 00:02:24.277 LINK vhost 00:02:24.541 CC test/lvol/esnap/esnap.o 00:02:24.541 LINK nvmf_tgt 00:02:24.541 LINK iscsi_tgt 00:02:24.541 LINK spdk_nvme_discover 00:02:24.541 LINK spdk_tgt 00:02:24.541 LINK event_perf 00:02:24.541 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.541 CXX test/cpp_headers/endian.o 00:02:24.541 LINK poller_perf 00:02:24.541 LINK cmb_copy 00:02:24.541 CXX test/cpp_headers/env_dpdk.o 00:02:24.541 CXX test/cpp_headers/env.o 00:02:24.541 LINK pmr_persistence 00:02:24.541 CXX test/cpp_headers/event.o 00:02:24.541 CXX test/cpp_headers/fd_group.o 00:02:24.541 CXX test/cpp_headers/fd.o 00:02:24.541 LINK lsvmd 00:02:24.541 LINK spdk_trace_record 00:02:24.541 LINK stub 00:02:24.541 CXX test/cpp_headers/file.o 00:02:24.541 CXX test/cpp_headers/ftl.o 00:02:24.541 LINK err_injection 00:02:24.541 LINK histogram_perf 00:02:24.541 LINK bdev_svc 00:02:24.541 LINK reactor 00:02:24.541 LINK vtophys 00:02:24.541 LINK zipf 00:02:24.541 LINK ioat_perf 00:02:24.541 LINK led 00:02:24.541 LINK jsoncat 00:02:24.541 LINK env_dpdk_post_init 00:02:24.541 CXX test/cpp_headers/gpt_spec.o 00:02:24.541 LINK reserve 00:02:24.541 LINK reactor_perf 00:02:24.541 LINK app_repeat 00:02:24.541 CXX test/cpp_headers/hexlify.o 00:02:24.801 LINK fused_ordering 00:02:24.801 LINK boot_partition 00:02:24.801 LINK hello_blob 00:02:24.801 LINK connect_stress 00:02:24.801 LINK thread 00:02:24.801 LINK startup 00:02:24.801 LINK doorbell_aers 00:02:24.801 LINK nvme_dp 00:02:24.801 LINK aer 00:02:24.801 LINK hello_world 00:02:24.801 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.801 CXX test/cpp_headers/histogram_data.o 00:02:24.801 CXX test/cpp_headers/idxd.o 00:02:24.801 LINK reset 00:02:24.801 CXX test/cpp_headers/idxd_spec.o 00:02:24.801 CXX test/cpp_headers/ioat.o 00:02:24.801 CXX test/cpp_headers/init.o 00:02:24.801 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.801 CXX test/cpp_headers/ioat_spec.o 00:02:24.801 LINK verify 00:02:24.801 LINK hello_bdev 00:02:24.801 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.801 LINK overhead 00:02:24.801 CXX test/cpp_headers/iscsi_spec.o 00:02:24.801 CXX test/cpp_headers/json.o 00:02:24.801 LINK reconnect 00:02:24.801 LINK simple_copy 00:02:24.801 LINK idxd_perf 00:02:24.801 CXX test/cpp_headers/jsonrpc.o 00:02:24.801 LINK nvme_compliance 00:02:24.801 CXX test/cpp_headers/keyring.o 00:02:24.801 LINK scheduler 00:02:24.801 LINK mkfs 00:02:24.801 CXX test/cpp_headers/likely.o 00:02:24.801 CXX test/cpp_headers/keyring_module.o 
00:02:24.801 LINK hello_sock 00:02:24.801 LINK sgl 00:02:24.801 LINK abort 00:02:24.801 CXX test/cpp_headers/log.o 00:02:24.801 CXX test/cpp_headers/lvol.o 00:02:24.801 LINK spdk_dd 00:02:24.801 CXX test/cpp_headers/memory.o 00:02:24.801 LINK hotplug 00:02:24.801 LINK nvmf 00:02:24.801 CXX test/cpp_headers/mmio.o 00:02:24.801 CXX test/cpp_headers/nbd.o 00:02:24.801 CXX test/cpp_headers/notify.o 00:02:24.801 CXX test/cpp_headers/nvme.o 00:02:24.801 LINK pci_ut 00:02:24.801 CXX test/cpp_headers/nvme_intel.o 00:02:24.801 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.801 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.801 LINK fdp 00:02:24.801 CXX test/cpp_headers/nvme_zns.o 00:02:24.801 CXX test/cpp_headers/nvme_spec.o 00:02:24.801 LINK bdevio 00:02:24.801 LINK arbitration 00:02:24.801 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.801 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.801 CXX test/cpp_headers/nvmf.o 00:02:24.801 CXX test/cpp_headers/nvmf_spec.o 00:02:24.801 CXX test/cpp_headers/nvmf_transport.o 00:02:24.801 CXX test/cpp_headers/opal.o 00:02:25.060 CXX test/cpp_headers/opal_spec.o 00:02:25.060 CXX test/cpp_headers/pci_ids.o 00:02:25.060 CXX test/cpp_headers/pipe.o 00:02:25.060 CXX test/cpp_headers/queue.o 00:02:25.060 CXX test/cpp_headers/rpc.o 00:02:25.060 CXX test/cpp_headers/reduce.o 00:02:25.060 CXX test/cpp_headers/scsi.o 00:02:25.060 CXX test/cpp_headers/scheduler.o 00:02:25.060 CXX test/cpp_headers/scsi_spec.o 00:02:25.060 CXX test/cpp_headers/sock.o 00:02:25.060 LINK nvme_manage 00:02:25.060 CXX test/cpp_headers/stdinc.o 00:02:25.060 CXX test/cpp_headers/string.o 00:02:25.060 CXX test/cpp_headers/thread.o 00:02:25.060 CXX test/cpp_headers/trace.o 00:02:25.060 LINK accel_perf 00:02:25.060 CXX test/cpp_headers/trace_parser.o 00:02:25.060 CXX test/cpp_headers/tree.o 00:02:25.060 CXX test/cpp_headers/ublk.o 00:02:25.060 CXX test/cpp_headers/util.o 00:02:25.060 LINK spdk_trace 00:02:25.060 LINK test_dma 00:02:25.060 CXX test/cpp_headers/uuid.o 00:02:25.060 CXX test/cpp_headers/version.o 00:02:25.060 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.060 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.060 CXX test/cpp_headers/vhost.o 00:02:25.060 CXX test/cpp_headers/vmd.o 00:02:25.060 CXX test/cpp_headers/xor.o 00:02:25.060 CXX test/cpp_headers/zipf.o 00:02:25.060 LINK blobcli 00:02:25.060 LINK spdk_bdev 00:02:25.060 LINK dif 00:02:25.060 LINK nvme_fuzz 00:02:25.318 LINK spdk_nvme 00:02:25.318 LINK spdk_nvme_perf 00:02:25.318 LINK mem_callbacks 00:02:25.318 LINK spdk_top 00:02:25.318 LINK spdk_nvme_identify 00:02:25.576 LINK vhost_fuzz 00:02:25.576 LINK bdevperf 00:02:25.835 LINK memory_ut 00:02:25.835 LINK cuse 00:02:26.403 LINK iscsi_fuzz 00:02:28.306 LINK esnap 00:02:28.565 00:02:28.565 real 0m42.619s 00:02:28.565 user 6m43.800s 00:02:28.565 sys 3m31.146s 00:02:28.565 22:53:20 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:28.565 22:53:20 make -- common/autotest_common.sh@10 -- $ set +x 00:02:28.565 ************************************ 00:02:28.565 END TEST make 00:02:28.565 ************************************ 00:02:28.565 22:53:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:28.565 22:53:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:28.565 22:53:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:28.565 22:53:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.565 22:53:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:28.565 22:53:20 -- 
pm/common@44 -- $ pid=628456 00:02:28.565 22:53:20 -- pm/common@50 -- $ kill -TERM 628456 00:02:28.565 22:53:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.565 22:53:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:28.565 22:53:20 -- pm/common@44 -- $ pid=628457 00:02:28.565 22:53:20 -- pm/common@50 -- $ kill -TERM 628457 00:02:28.565 22:53:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.565 22:53:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:28.565 22:53:20 -- pm/common@44 -- $ pid=628459 00:02:28.565 22:53:20 -- pm/common@50 -- $ kill -TERM 628459 00:02:28.565 22:53:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.565 22:53:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:28.565 22:53:20 -- pm/common@44 -- $ pid=628479 00:02:28.565 22:53:20 -- pm/common@50 -- $ sudo -E kill -TERM 628479 00:02:28.565 22:53:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:28.565 22:53:20 -- nvmf/common.sh@7 -- # uname -s 00:02:28.565 22:53:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.565 22:53:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.565 22:53:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.565 22:53:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.565 22:53:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.565 22:53:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.565 22:53:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.565 22:53:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.565 22:53:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.565 22:53:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.565 22:53:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:02:28.565 22:53:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:02:28.565 22:53:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.565 22:53:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.565 22:53:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:28.565 22:53:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:28.565 22:53:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:28.565 22:53:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.565 22:53:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.565 22:53:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.565 22:53:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.565 22:53:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.565 
22:53:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.565 22:53:20 -- paths/export.sh@5 -- # export PATH 00:02:28.565 22:53:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.565 22:53:20 -- nvmf/common.sh@47 -- # : 0 00:02:28.565 22:53:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:28.565 22:53:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:28.565 22:53:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:28.565 22:53:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.565 22:53:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.565 22:53:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:28.565 22:53:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:28.565 22:53:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:28.565 22:53:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.565 22:53:20 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.565 22:53:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.565 22:53:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.565 22:53:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.565 22:53:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.565 22:53:20 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.565 22:53:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.825 22:53:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.825 22:53:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.825 22:53:20 -- spdk/autotest.sh@48 -- # udevadm_pid=686670 00:02:28.825 22:53:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:28.825 22:53:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:28.825 22:53:20 -- pm/common@17 -- # local monitor 00:02:28.825 22:53:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.825 22:53:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.825 22:53:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.825 22:53:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.825 22:53:20 -- pm/common@21 -- # date +%s 00:02:28.825 22:53:20 -- pm/common@21 -- # date +%s 00:02:28.825 22:53:20 -- pm/common@25 -- # sleep 1 00:02:28.825 22:53:20 -- pm/common@21 -- # date +%s 00:02:28.825 22:53:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793600 00:02:28.825 22:53:20 -- pm/common@21 -- # date +%s 00:02:28.825 22:53:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1717793600 00:02:28.825 22:53:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793600 00:02:28.825 22:53:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717793600 00:02:28.825 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793600_collect-vmstat.pm.log 00:02:28.825 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793600_collect-cpu-load.pm.log 00:02:28.825 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793600_collect-cpu-temp.pm.log 00:02:28.825 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717793600_collect-bmc-pm.bmc.pm.log 00:02:29.762 22:53:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:29.762 22:53:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:29.762 22:53:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:29.762 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:02:29.762 22:53:21 -- spdk/autotest.sh@59 -- # create_test_list 00:02:29.762 22:53:21 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:29.762 22:53:21 -- common/autotest_common.sh@10 -- # set +x 00:02:29.762 22:53:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:29.762 22:53:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.762 22:53:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.762 22:53:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:29.762 22:53:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.762 22:53:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:29.762 22:53:21 -- common/autotest_common.sh@1454 -- # uname 00:02:29.762 22:53:21 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:29.763 22:53:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:29.763 22:53:21 -- common/autotest_common.sh@1474 -- # uname 00:02:29.763 22:53:21 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:29.763 22:53:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:29.763 22:53:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:29.763 22:53:21 -- spdk/autotest.sh@72 -- # hash lcov 00:02:29.763 22:53:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:29.763 22:53:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:29.763 --rc lcov_branch_coverage=1 00:02:29.763 --rc lcov_function_coverage=1 00:02:29.763 --rc genhtml_branch_coverage=1 00:02:29.763 --rc genhtml_function_coverage=1 00:02:29.763 --rc genhtml_legend=1 00:02:29.763 --rc geninfo_all_blocks=1 00:02:29.763 ' 00:02:29.763 22:53:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:29.763 --rc lcov_branch_coverage=1 00:02:29.763 --rc lcov_function_coverage=1 00:02:29.763 --rc genhtml_branch_coverage=1 00:02:29.763 --rc genhtml_function_coverage=1 00:02:29.763 --rc genhtml_legend=1 00:02:29.763 --rc geninfo_all_blocks=1 00:02:29.763 ' 00:02:29.763 22:53:21 -- 
spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:29.763 --rc lcov_branch_coverage=1 00:02:29.763 --rc lcov_function_coverage=1 00:02:29.763 --rc genhtml_branch_coverage=1 00:02:29.763 --rc genhtml_function_coverage=1 00:02:29.763 --rc genhtml_legend=1 00:02:29.763 --rc geninfo_all_blocks=1 00:02:29.763 --no-external' 00:02:29.763 22:53:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:29.763 --rc lcov_branch_coverage=1 00:02:29.763 --rc lcov_function_coverage=1 00:02:29.763 --rc genhtml_branch_coverage=1 00:02:29.763 --rc genhtml_function_coverage=1 00:02:29.763 --rc genhtml_legend=1 00:02:29.763 --rc geninfo_all_blocks=1 00:02:29.763 --no-external' 00:02:29.763 22:53:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:29.763 lcov: LCOV version 1.14 00:02:29.763 22:53:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:39.736 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:39.736 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:51.938 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:51.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:51.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 
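(Illustrative note, not captured output.) The geninfo warnings running through this stretch come from the Baseline lcov capture started at autotest.sh@85: the test/cpp_headers objects exist only to compile-check the public headers, so their .gcno files contain no executable functions and geninfo simply skips them, which is expected. As a hedged sketch of where cov_base.info usually goes next, the baseline is merged with a post-run capture roughly as below; cov_test.info, cov_total.info and the coverage_html directory are illustrative names, not taken from this job.

  # Hedged sketch only; file names other than cov_base.info are assumptions.
  lcov $LCOV_OPTS -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk \
       -t Autotest -o cov_test.info                    # capture counters after the tests ran
  lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info \
       -o cov_total.info                               # fold the Baseline tracefile in
  genhtml cov_total.info -o coverage_html              # the genhtml_* switches in LCOV_OPTS apply here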
00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:51.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:51.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:51.940 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:51.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:51.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:52.507 22:53:44 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:52.507 22:53:44 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:52.507 22:53:44 -- common/autotest_common.sh@10 -- # set +x 00:02:52.507 22:53:44 -- spdk/autotest.sh@91 -- # rm -f 00:02:52.507 22:53:44 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.794 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:02:55.794 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.794 
0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.794 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.794 22:53:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:55.794 22:53:47 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:55.794 22:53:47 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:55.794 22:53:47 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:55.794 22:53:47 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:55.794 22:53:47 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:55.794 22:53:47 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:55.794 22:53:47 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.794 22:53:47 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:55.794 22:53:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:55.794 22:53:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:55.794 22:53:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:55.794 22:53:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:55.794 22:53:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:55.794 22:53:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:55.794 No valid GPT data, bailing 00:02:55.794 22:53:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.794 22:53:47 -- scripts/common.sh@391 -- # pt= 00:02:55.794 22:53:47 -- scripts/common.sh@392 -- # return 1 00:02:55.794 22:53:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:55.794 1+0 records in 00:02:55.794 1+0 records out 00:02:55.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0018301 s, 573 MB/s 00:02:55.794 22:53:47 -- spdk/autotest.sh@118 -- # sync 00:02:55.794 22:53:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:55.794 22:53:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:55.794 22:53:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:01.107 22:53:52 -- spdk/autotest.sh@124 -- # uname -s 00:03:01.107 22:53:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:01.107 22:53:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.107 22:53:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:01.107 22:53:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:01.107 22:53:52 -- common/autotest_common.sh@10 -- # set +x 00:03:01.107 ************************************ 00:03:01.107 START TEST setup.sh 00:03:01.107 ************************************ 00:03:01.107 22:53:52 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.107 * Looking for test storage... 
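(Illustrative note, not captured output.) The namespace scrub traced just above (scripts/common.sh block_in_use followed by dd at autotest.sh@114) only zeroes a device that nothing claims: spdk-gpt.py reported "No valid GPT data, bailing" and blkid returned an empty PTTYPE, so the first MiB of /dev/nvme0n1 was wiped. A condensed reconstruction of that guard, simplified and not the verbatim SPDK source, is sketched below; the is_free_namespace name is an illustrative stand-in for block_in_use.

  # Condensed sketch of the guard traced above; not the verbatim script.
  is_free_namespace() {                     # stand-in name for block_in_use
      local block=$1 pt
      # blkid prints the partition-table type, if any; the empty "pt=" in the
      # trace means no GPT/MBR claims this namespace.
      pt=$(blkid -s PTTYPE -o value "$block" || true)
      [[ -z $pt ]]
  }
  # Only an unclaimed namespace gets its first MiB zeroed before the tests run:
  if is_free_namespace /dev/nvme0n1; then
      dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
  fi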
00:03:01.107 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:01.108 22:53:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:01.108 22:53:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:01.108 22:53:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:01.108 22:53:52 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:01.108 22:53:52 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:01.108 22:53:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.108 ************************************ 00:03:01.108 START TEST acl 00:03:01.108 ************************************ 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:01.108 * Looking for test storage... 00:03:01.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.108 22:53:52 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:01.108 22:53:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:01.108 22:53:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.108 22:53:52 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.396 22:53:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.396 22:53:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:04.396 22:53:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.396 22:53:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:04.396 22:53:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.396 22:53:56 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:06.933 Hugepages 00:03:06.933 node hugesize free / total 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 
22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 00:03:06.933 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:06.933 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@22 -- 
# drivers["$dev"]=nvme 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:07.192 22:53:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:07.192 22:53:59 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:07.192 22:53:59 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:07.192 22:53:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.192 ************************************ 00:03:07.192 START TEST denied 00:03:07.192 ************************************ 00:03:07.192 22:53:59 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:07.192 22:53:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:03:07.192 22:53:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:07.192 22:53:59 
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:03:07.192 22:53:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.192 22:53:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:10.484 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.484 22:54:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.669 00:03:14.669 real 0m7.427s 00:03:14.669 user 0m2.449s 00:03:14.669 sys 0m4.296s 00:03:14.669 22:54:06 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:14.669 22:54:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.669 ************************************ 00:03:14.669 END TEST denied 00:03:14.669 ************************************ 00:03:14.669 22:54:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.669 22:54:06 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:14.669 22:54:06 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:14.669 22:54:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.669 ************************************ 00:03:14.669 START TEST allowed 00:03:14.669 ************************************ 00:03:14.669 22:54:06 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:14.669 22:54:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:03:14.669 22:54:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:03:14.669 22:54:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.669 22:54:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.669 22:54:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:19.937 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:19.937 22:54:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:19.937 22:54:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:19.937 22:54:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:19.937 22:54:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.937 22:54:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.471 00:03:22.471 real 0m7.828s 00:03:22.471 user 0m2.313s 00:03:22.471 sys 0m4.100s 00:03:22.471 22:54:14 setup.sh.acl.allowed -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:03:22.471 22:54:14 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:22.471 ************************************ 00:03:22.471 END TEST allowed 00:03:22.471 ************************************ 00:03:22.471 00:03:22.471 real 0m21.990s 00:03:22.471 user 0m7.279s 00:03:22.471 sys 0m12.819s 00:03:22.471 22:54:14 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:22.471 22:54:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:22.471 ************************************ 00:03:22.471 END TEST acl 00:03:22.471 ************************************ 00:03:22.471 22:54:14 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:22.471 22:54:14 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:22.471 22:54:14 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:22.471 22:54:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.471 ************************************ 00:03:22.471 START TEST hugepages 00:03:22.471 ************************************ 00:03:22.471 22:54:14 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:22.731 * Looking for test storage... 00:03:22.731 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 168468416 kB' 'MemAvailable: 171475324 kB' 'Buffers: 4132 kB' 'Cached: 14537868 kB' 'SwapCached: 0 kB' 'Active: 11622452 kB' 'Inactive: 3540592 kB' 'Active(anon): 11145480 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624440 kB' 
'Mapped: 241620 kB' 'Shmem: 10524436 kB' 'KReclaimable: 267116 kB' 'Slab: 896152 kB' 'SReclaimable: 267116 kB' 'SUnreclaim: 629036 kB' 'KernelStack: 20928 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982036 kB' 'Committed_AS: 12645788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317868 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 
22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.731 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 
-- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.732 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
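(Illustrative note, not captured output.) The long xtrace walk through this block is setup/common.sh get_meminfo reading /proc/meminfo one key at a time until it reaches Hugepagesize, echoing 2048 so hugepages.sh can set default_hugepages. A condensed reconstruction of that helper, simplified and not the verbatim SPDK source, is sketched below.

  # Condensed sketch of the get_meminfo walk traced above; not the verbatim script.
  shopt -s extglob                               # needed for the +([0-9]) prefix strip
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local -a mem
      # With a node argument, read the per-node meminfo instead of the global one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")           # strip the "Node N " prefix, as in the trace
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # every key is compared against the requested one; only the match echoes
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }
  # hugepages.sh@16 resolves the default hugepage size this way (2048 kB on this node):
  default_hugepages=$(get_meminfo Hugepagesize)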
00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.733 22:54:14 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:22.733 22:54:14 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:22.733 22:54:14 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:22.733 22:54:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.733 ************************************ 00:03:22.733 START TEST default_setup 00:03:22.733 ************************************ 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.733 22:54:14 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:26.021 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:26.021 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:27.403 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
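At this point the sizing for default_setup is settled: the requested 2097152 kB divided by the 2048 kB default page size gives nr_hugepages=1024, all of it assigned to node 0, and clear_hp has already zeroed every per-node pool (echo 0 into each nr_hugepages file) with CLEAR_HUGE=yes exported. scripts/setup.sh then rebinds the ioatdma channels and the NVMe controller at 0000:5f:00.0 from their kernel drivers to vfio-pci before verification begins. A short sketch of that sizing and clearing step, with assumed variable names rather than the literal setup/hugepages.sh code:

    # illustrative sketch only
    default_hugepages=2048                              # kB, from Hugepagesize
    size_kb=2097152                                     # 2 GiB requested by default_setup
    nr_hugepages=$(( size_kb / default_hugepages ))     # -> 1024 pages on node 0

    # clear_hp: empty every per-node pool before the test allocates its own
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"                                  # needs root, as the CI job runs
    done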
00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170590456 kB' 'MemAvailable: 173597028 kB' 'Buffers: 4132 kB' 'Cached: 14537976 kB' 'SwapCached: 0 kB' 'Active: 11639688 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162716 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641536 kB' 'Mapped: 241600 kB' 'Shmem: 10524544 kB' 'KReclaimable: 266444 kB' 'Slab: 893968 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627524 kB' 'KernelStack: 21280 kB' 'PageTables: 10208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12662192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.403 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
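The block printed above is the first full /proc/meminfo snapshot taken inside verify_nr_hugepages; the fields it goes on to pick out are AnonHugePages (to make sure transparent hugepages are not being counted), HugePages_Total and HugePages_Free (1024 pages of 2048 kB, i.e. the 2 GiB pool just allocated and still entirely free), and HugePages_Rsvd/HugePages_Surp. A hedged one-liner that pulls the same counters directly, for illustration only; the script itself re-walks the snapshot key by key, as the continue lines that follow show:

    awk -F': +' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):/ {print $1, $2}' /proc/meminfo
    # expected here: AnonHugePages 0 kB, HugePages_Total 1024, HugePages_Free 1024, Hugepagesize 2048 kB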
00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.404 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170598532 kB' 'MemAvailable: 173605104 kB' 'Buffers: 4132 kB' 'Cached: 14537976 kB' 'SwapCached: 0 kB' 'Active: 11639484 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162512 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641140 kB' 'Mapped: 241608 kB' 'Shmem: 10524544 kB' 'KReclaimable: 266444 kB' 'Slab: 893844 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627400 kB' 'KernelStack: 21200 kB' 'PageTables: 10120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12662208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317964 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.405 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.406 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.407 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170598412 kB' 'MemAvailable: 173604984 kB' 'Buffers: 4132 kB' 'Cached: 14538000 kB' 'SwapCached: 0 kB' 'Active: 11639024 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162052 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640844 kB' 'Mapped: 241600 kB' 'Shmem: 10524568 kB' 'KReclaimable: 266444 kB' 'Slab: 893820 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627376 kB' 'KernelStack: 21120 kB' 'PageTables: 9808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12662228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317916 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.407 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.407 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.408 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.409 nr_hugepages=1024 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.409 resv_hugepages=0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.409 surplus_hugepages=0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.409 anon_hugepages=0 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.409 22:54:19 
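The trace above is setup/common.sh's get_meminfo walking a cached copy of /proc/meminfo key by key (the backslash-escaped \H\u\g\e\P\a\g\e\s\_\R\s\v\d is just how xtrace prints the match pattern) until it reaches HugePages_Rsvd, which is 0 in this run; setup/hugepages.sh then echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and evaluates (( 1024 == nr_hugepages + surp + resv )), i.e. 1024 == 1024 + 0 + 0 with the values shown here. A minimal sketch of that kind of lookup, using a hypothetical helper name meminfo_value rather than the harness's own function, might look like:

    # Sketch only, not the harness code: return one value from /proc/meminfo,
    # scanning "key: value" pairs the same way the traced loop does.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0   # key not present
    }

    nr_hugepages=1024                         # requested page count echoed above
    surp=$(meminfo_value HugePages_Surp)      # 0 in this run
    resv=$(meminfo_value HugePages_Rsvd)      # 0 in this run
    free=$(meminfo_value HugePages_Free)      # 1024 in this run
    # One reading of the (( 1024 == nr_hugepages + surp + resv )) check traced
    # above; HugePages_Free and nr_hugepages are both 1024 here, so the
    # identity holds either way it is interpreted.
    (( free == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"

The snapshot itself is internally consistent as well: HugePages_Total 1024 at Hugepagesize 2048 kB accounts for the Hugetlb 2097152 kB line (1024 x 2048 kB).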
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.409 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170601060 kB' 'MemAvailable: 173607632 kB' 'Buffers: 4132 kB' 'Cached: 14538020 kB' 'SwapCached: 0 kB' 'Active: 11639624 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162652 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641432 kB' 'Mapped: 241600 kB' 'Shmem: 10524588 kB' 'KReclaimable: 266444 kB' 'Slab: 893652 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627208 kB' 'KernelStack: 21184 kB' 'PageTables: 9656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12662252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318124 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 
22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.671 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:27.672 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
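At this point the harness moves to per-node accounting: setup/hugepages.sh@27-33 globs /sys/devices/system/node/node+([0-9]) (two nodes on this host, expecting 1024 pages on node0 and 0 on node1), and setup/common.sh@23-24 switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo before scanning for HugePages_Surp. A rough sketch of that per-node read follows, assuming (as the mem=("${mem[@]#Node +([0-9]) }") step above implies) that node meminfo lines carry a "Node <id> " prefix that has to be stripped; the helper name node_meminfo_value is hypothetical:

    shopt -s extglob   # the +([0-9]) glob and prefix strip below need extglob

    # Sketch only: read one key from a specific NUMA node's meminfo file.
    node_meminfo_value() {
        local get=$1 node=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # drop the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < "/sys/devices/system/node/node$node/meminfo"
        echo 0
    }

    # Enumerate nodes the same way the @29 loop does and report surplus pages:
    for node in /sys/devices/system/node/node+([0-9]); do
        echo "node${node##*node} HugePages_Surp: $(node_meminfo_value HugePages_Surp "${node##*node}")"
    done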
00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83404472 kB' 'MemUsed: 14211156 kB' 'SwapCached: 0 kB' 'Active: 7469436 kB' 'Inactive: 3343336 kB' 'Active(anon): 7251304 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366272 kB' 'Mapped: 179460 kB' 'AnonPages: 449752 kB' 'Shmem: 6804804 kB' 'KernelStack: 14056 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525484 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 342952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.673 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.674 node0=1024 expecting 1024 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.674 00:03:27.674 real 0m4.829s 00:03:27.674 user 0m1.387s 00:03:27.674 sys 0m2.086s 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.674 22:54:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:27.674 ************************************ 00:03:27.674 END TEST default_setup 00:03:27.674 ************************************ 00:03:27.674 22:54:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:27.674 22:54:19 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.674 22:54:19 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.674 22:54:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.674 ************************************ 00:03:27.674 START TEST 
per_node_1G_alloc 00:03:27.674 ************************************ 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.674 22:54:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:31.034 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.034 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.034 
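Just above, get_test_nr_hugepages turns the 1048576 kB (1 GiB) request into nr_hugepages=512 and assigns that count to each of the two requested nodes before handing NRHUGE=512 and HUGENODE=0,1 to scripts/setup.sh. A minimal sketch of that arithmetic, assuming the divisor is the Hugepagesize value from /proc/meminfo (2048 kB on this machine, per the meminfo dumps further down); this is an approximation of the helper, not a copy of it:

  #!/usr/bin/env bash
  # Sketch: derive a per-node hugepage count the way the trace above suggests.
  # Assumes the page size comes from the Hugepagesize line in /proc/meminfo.
  size_kb=1048576                      # 1 GiB request, as in the trace
  nodes=(0 1)

  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512

  declare -A nodes_test=()
  for node in "${nodes[@]}"; do
      nodes_test[$node]=$nr_hugepages              # 512 pages on each node
  done

  # setup.sh picks the totals up from the environment.
  echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${nodes[*]}")"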
0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.034 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170613340 kB' 'MemAvailable: 173619912 kB' 'Buffers: 4132 kB' 'Cached: 14538116 kB' 'SwapCached: 0 kB' 'Active: 11639592 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162620 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640752 kB' 'Mapped: 241712 kB' 'Shmem: 10524684 kB' 'KReclaimable: 266444 kB' 'Slab: 894016 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627572 kB' 'KernelStack: 20848 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12660400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317964 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.034 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 
22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.035 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170613756 kB' 'MemAvailable: 173620328 kB' 'Buffers: 4132 kB' 'Cached: 14538120 kB' 'SwapCached: 0 kB' 'Active: 11639616 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162644 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640788 kB' 'Mapped: 241684 kB' 'Shmem: 10524688 kB' 'KReclaimable: 266444 kB' 'Slab: 894016 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627572 kB' 'KernelStack: 20848 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12660416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317948 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc 
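Each block of "[[ Field == AnonHugePages ]] ... continue" lines above, and the HugePages_Surp blocks that follow, is the xtrace of one field-lookup pass over /proc/meminfo that ends with "echo <value>" and "return 0". A standalone sketch reconstructed from that trace; the real helper lives in setup/common.sh (it also handles per-node meminfo files, whose lines carry a "Node N " prefix), so treat this as an approximation rather than the script's own code:

  #!/usr/bin/env bash
  # Sketch of the /proc/meminfo lookup whose xtrace fills this log: every
  # "continue" line above is one iteration of the read loop below.
  get_meminfo() {
      # Usage: get_meminfo <Field> [<numa node>]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _

      # With a node argument, read that node's own meminfo file if it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      while IFS= read -r line; do
          line=${line#"Node $node "}            # no-op for /proc/meminfo lines
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue      # the "continue" lines in the trace
          echo "$val"                           # value in kB, or a bare page count
          return 0
      done < "$mem_f"
      return 1
  }

  get_meminfo HugePages_Surp    # prints 0 on this system, per the trace above
  get_meminfo MemTotal 0        # per-node lookups follow the same pattern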
-- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 
22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.036 
22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.036 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.037 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170613828 kB' 'MemAvailable: 173620400 kB' 'Buffers: 4132 kB' 'Cached: 14538140 kB' 'SwapCached: 0 kB' 'Active: 11638984 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162012 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640636 kB' 'Mapped: 241604 kB' 'Shmem: 10524708 kB' 'KReclaimable: 266444 kB' 'Slab: 893996 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627552 kB' 'KernelStack: 20896 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12660240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317916 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc 
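At this point the pass above has resolved anon=0 and surp=0, and the same lookup is now running for HugePages_Rsvd. A hedged sketch of the bookkeeping this appears to feed: gather the global hugepage counters the same way and sanity-check them against the 1024 pages configured above. The comparison at the end is an illustrative assumption; the exact rule verify_nr_hugepages applies later is not visible in this part of the log.

  #!/usr/bin/env bash
  # Hedged sketch: collect the counters this verification pass reads, then
  # apply an illustrative (assumed) sanity check against the requested pool.
  expected=1024                                                 # nr_hugepages requested above

  anon=$(awk '/^AnonHugePages:/    {print $2}' /proc/meminfo)   # kB of THP in use
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
  rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

  echo "anon=${anon}kB total=$total free=$free rsvd=$rsvd surp=$surp"

  # Assumed check: nothing allocated beyond the static pool (surp == 0)
  # and the pool is the size the test asked for.
  if (( surp == 0 && total == expected )); then
      echo OK
  else
      echo MISMATCH
  fi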
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.038 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.039 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
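The scan above (and continuing below) is setup/common.sh's get_meminfo helper walking every key of /proc/meminfo with IFS=': ' and read -r var val _, hitting continue for each field until it reaches the requested key (HugePages_Rsvd here) and echoing its value. A minimal sketch of that pattern follows; the name get_meminfo_sketch is hypothetical and this is a simplified reconstruction from the trace, not the SPDK helper itself:

    shopt -s extglob                      # needed for the Node-prefix strip below
    get_meminfo_sketch() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        # Per-node queries read the node-specific meminfo file when it exists,
        # as the trace does for /sys/devices/system/node/node0 and node1.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }            # per-node files prefix every key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    resv=$(get_meminfo_sketch HugePages_Rsvd)      # mirrors the "echo 0" / resv=0 result in this run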
00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.040 
22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.040 22:54:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.040 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.040 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.040 nr_hugepages=1024 00:03:31.040 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.040 resv_hugepages=0 00:03:31.040 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.041 surplus_hugepages=0 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.041 anon_hugepages=0 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170613808 kB' 'MemAvailable: 173620380 kB' 'Buffers: 4132 kB' 'Cached: 14538160 kB' 'SwapCached: 0 kB' 'Active: 11638776 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161804 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640344 kB' 'Mapped: 241604 kB' 'Shmem: 10524728 kB' 'KReclaimable: 266444 kB' 'Slab: 893980 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627536 kB' 'KernelStack: 20816 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12660460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317884 
kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.041 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
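The pass below repeats the same key-by-key scan for HugePages_Total (it returns 1024 at setup/common.sh@33 further down), and setup/hugepages.sh@107-110 then checks that the system-wide total matches the requested page count plus any reserved and surplus pages. A sketch of that consistency check, reusing the hypothetical helper above with the values echoed in this run:

    nr_hugepages=1024                               # echoed as nr_hugepages=1024 above
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
    # 1024 == 1024 + 0 + 0, so the allocation is treated as consistent.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2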
00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.042 22:54:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.042 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 84449972 kB' 'MemUsed: 13165656 kB' 'SwapCached: 0 kB' 'Active: 7468580 kB' 'Inactive: 3343336 kB' 'Active(anon): 7250448 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366320 kB' 'Mapped: 179472 kB' 'AnonPages: 448800 kB' 'Shmem: 6804852 kB' 'KernelStack: 13528 kB' 'PageTables: 6584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525860 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.043 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86163584 
kB' 'MemUsed: 7601956 kB' 'SwapCached: 0 kB' 'Active: 4169796 kB' 'Inactive: 197256 kB' 'Active(anon): 3910956 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4175972 kB' 'Mapped: 62132 kB' 'AnonPages: 191140 kB' 'Shmem: 3719876 kB' 'KernelStack: 7288 kB' 'PageTables: 2532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83912 kB' 'Slab: 368120 kB' 'SReclaimable: 83912 kB' 'SUnreclaim: 284208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.044 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.045 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:31.046 node0=512 expecting 512 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:31.046 node1=512 expecting 512 00:03:31.046 22:54:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:31.046 00:03:31.046 real 0m3.259s 00:03:31.046 user 0m1.281s 00:03:31.046 sys 0m2.014s 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:31.046 22:54:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.046 ************************************ 00:03:31.046 END TEST per_node_1G_alloc 00:03:31.046 ************************************ 00:03:31.046 22:54:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:31.046 22:54:23 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:31.046 22:54:23 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.046 22:54:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.046 ************************************ 00:03:31.046 START TEST even_2G_alloc 00:03:31.046 ************************************ 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.046 
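[editor's sketch] The per_node_1G_alloc trace above is dominated by setup/common.sh's get_meminfo helper walking a per-node meminfo file key by key until it hits the requested field (HugePages_Surp). The following is a reconstruction of that loop from the trace, not the verbatim setup/common.sh: the function name, the here-string read, and the for-loop are mine; the file paths, the extglob "Node N " strip, the IFS=': ' / read -r var val _ split, and the echo/return behaviour are taken directly from the traced commands.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern seen in the trace above.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local line var val _
    local -a mem
    # Per-node queries read the node-specific file instead, as in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Field:   value kB" into its field name and value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Usage matching the run above; on this box it printed 0 for node 1.
get_meminfo_sketch HugePages_Surp 1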
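[editor's sketch] The even_2G_alloc setup that starts here requests 2097152 kB of hugepages, which at the 2048 kB default page size is 1024 pages, and then spreads them evenly over the two NUMA nodes so each node ends up "expecting 512". The snippet below only illustrates that arithmetic under those assumptions; variable names mirror the trace, but the real setup/hugepages.sh also handles user-specified node lists and any division remainder, which this sketch skips.

#!/usr/bin/env bash
# Sketch of the even per-node hugepage split performed by the even_2G_alloc setup.
size_kb=2097152                                              # 2G requested, as in the trace
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
nr_hugepages=$(( size_kb / hugepage_kb ))                    # 2097152 / 2048 = 1024

no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1                             # fall back to one node

declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))          # 512 per node on this 2-node box
done

# Mirrors the "node0=512 expecting 512" / "node1=512 expecting 512" lines in the log.
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done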
22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.046 22:54:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:34.343 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.343 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.343 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.343 22:54:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170618172 kB' 'MemAvailable: 173624744 kB' 'Buffers: 4132 kB' 'Cached: 14538276 kB' 'SwapCached: 0 kB' 'Active: 11638660 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161688 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639524 kB' 'Mapped: 240612 kB' 'Shmem: 10524844 kB' 'KReclaimable: 266444 kB' 'Slab: 893368 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626924 kB' 'KernelStack: 20816 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12653656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317932 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.343 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.344 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.345 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170618064 kB' 'MemAvailable: 173624636 kB' 'Buffers: 4132 kB' 'Cached: 14538280 kB' 'SwapCached: 0 kB' 'Active: 11637776 kB' 'Inactive: 3540592 kB' 'Active(anon): 11160804 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639164 kB' 'Mapped: 240612 kB' 'Shmem: 10524848 kB' 'KReclaimable: 266444 kB' 'Slab: 893404 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626960 kB' 'KernelStack: 20800 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12653672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317916 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.346 22:54:26 
00:03:34.346 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 per-key scan of /proc/meminfo for HugePages_Surp: Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each read and skipped with 'continue'
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
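The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo with IFS=': ' and read -r var val _, echoing the value of the first key that matches the requested name (here HugePages_Surp -> 0). Below is a minimal bash sketch of that technique only, not the SPDK setup/common.sh source; the name get_meminfo_sketch and the sed-based 'Node N ' prefix strip are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: mirrors the per-key scan pattern seen in the xtrace above.
# get_meminfo_sketch is a made-up name; the real helper lives in the SPDK
# scripts and differs in detail.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo;
    # with no node argument the existence test fails (as in the trace above)
    # and the system-wide /proc/meminfo is used instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Split each line on ':' and whitespace; print the value of the first
    # field whose name equals the requested key, skip everything else.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # drop the 'Node N ' prefix of per-node files
    return 1
}
# Example usage:
#   get_meminfo_sketch HugePages_Surp      -> 0 on this host
#   get_meminfo_sketch HugePages_Total 0   -> total for NUMA node 0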
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.347 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.348 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170618124 kB' 'MemAvailable: 173624696 kB' 'Buffers: 4132 kB' 'Cached: 14538280 kB' 'SwapCached: 0 kB' 'Active: 11637776 kB' 'Inactive: 3540592 kB' 'Active(anon): 11160804 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639164 kB' 'Mapped: 240612 kB' 'Shmem: 10524848 kB' 'KReclaimable: 266444 kB' 'Slab: 893404 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626960 kB' 'KernelStack: 20800 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12653696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317916 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB'
00:03:34.349 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 per-key scan of /proc/meminfo for HugePages_Rsvd: MemTotal through HugePages_Free are each read and skipped with 'continue'
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:34.350 nr_hugepages=1024
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.350 resv_hugepages=0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.350 surplus_hugepages=0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.350 anon_hugepages=0
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
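At setup/hugepages.sh@107 and @109 above, the test asserts that the 1024 requested 2048 kB pages are all accounted for: the kernel's HugePages_Total must equal nr_hugepages plus the surplus and reserved counts just read (1024 == 1024 + 0 + 0). A hedged sketch of that consistency check, reusing the illustrative get_meminfo_sketch helper from the earlier sketch (variable names are assumptions, not the SPDK script's):

# Sketch of the accounting check traced above; nr_hugepages=1024 matches
# the value echoed by the test, the rest is read live from /proc/meminfo.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
fi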
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170618684 kB' 'MemAvailable: 173625256 kB' 'Buffers: 4132 kB' 'Cached: 14538316 kB' 'SwapCached: 0 kB' 'Active: 11638188 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161216 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639540 kB' 'Mapped: 240620 kB' 'Shmem: 10524884 kB' 'KReclaimable: 266444 kB' 'Slab: 893404 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626960 kB' 'KernelStack: 20768 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12656324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317932 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB'
00:03:34.350 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 per-key scan of /proc/meminfo for HugePages_Total: MemTotal through Unaccepted are each read and skipped with 'continue'
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.351 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
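The get_nodes trace above shows the even_2G_alloc case expecting the 1024 pages to be split evenly, 512 per node, across the two NUMA nodes, with each node's counters then read from /sys/devices/system/node/nodeN/meminfo. A hedged sketch of that per-node verification, again built on the illustrative get_meminfo_sketch helper; nodes_expected is an assumed name, and 512 mirrors the value seen in the trace.

# Sketch of the per-node check: expect 512 x 2048 kB pages on every
# online NUMA node and compare against the node's own meminfo file.
nodes_expected=()
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node/meminfo ]] || continue
    nodes_expected[${node##*node}]=512
done
for n in "${!nodes_expected[@]}"; do
    got=$(get_meminfo_sketch HugePages_Total "$n")
    if (( got == nodes_expected[n] )); then
        echo "node$n holds the expected ${nodes_expected[n]} hugepages"
    else
        echo "node$n: expected ${nodes_expected[n]}, got $got" >&2
    fi
done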
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 84452296 kB' 'MemUsed: 13163332 kB' 'SwapCached: 0 kB' 'Active: 7469568 kB' 'Inactive: 3343336 kB' 'Active(anon): 7251436 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366504 kB' 'Mapped: 179092 kB' 'AnonPages: 449628 kB' 'Shmem: 6805036 kB' 'KernelStack: 13576 kB' 'PageTables: 6376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525820 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 per-key scan of node0 meminfo for HugePages_Surp: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable and Bounce are each read and skipped with 'continue'
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.352 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
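The node-0 lookup has just completed at this point: get_meminfo was asked for HugePages_Surp on node 0, switched its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo, stripped the "Node 0 " prefix that the per-node file adds to every line, and scanned key/value pairs with IFS=': ' until HugePages_Surp matched, echoing 0; the same scan repeats next for node 1. A minimal standalone sketch of that pattern, assuming extglob is enabled as in the real script (read_node_meminfo is a hypothetical name, not the helper in setup/common.sh):

    shopt -s extglob
    # read_node_meminfo KEY [NODE] - scan (per-node) meminfo for KEY and print its value
    read_node_meminfo() {
        local get=$1 node=${2-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    read_node_meminfo HugePages_Surp 0       # prints 0 on this host, matching the trace
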
00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86165984 kB' 'MemUsed: 7599556 kB' 'SwapCached: 0 kB' 'Active: 4168744 kB' 'Inactive: 197256 kB' 'Active(anon): 3909904 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4175968 kB' 'Mapped: 61528 kB' 'AnonPages: 190036 kB' 'Shmem: 3719872 kB' 'KernelStack: 7368 kB' 'PageTables: 2356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83912 kB' 'Slab: 367584 kB' 'SReclaimable: 83912 kB' 'SUnreclaim: 283672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.353 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.354 node0=512 expecting 512 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:34.354 node1=512 expecting 512 00:03:34.354 22:54:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.354 00:03:34.355 real 0m3.325s 00:03:34.355 user 0m1.344s 00:03:34.355 sys 0m2.053s 00:03:34.355 22:54:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.355 22:54:26 
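With both per-node surplus reads returning 0, the even_2G_alloc verification comes down to two facts already visible in the trace: the global HugePages_Total of 1024 satisfied (( 1024 == nr_hugepages + surp + resv )), and each NUMA node ended up with 512 pages, which is what the "node0=512 expecting 512" and "node1=512 expecting 512" lines report. One way to reproduce that check by hand, reusing the read_node_meminfo sketch above together with the per-node sysfs counters (illustrative only, assuming two nodes and the default 2048 kB page size):

    # sanity-check an even hugepage split across NUMA nodes (illustrative)
    nr_hugepages=1024 surp=0 resv=0
    total=$(read_node_meminfo HugePages_Total)        # global count from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) || echo "unexpected total: $total"
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        sys_count=$(< "$node_dir"/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node$node=$sys_count expecting $(( nr_hugepages / 2 ))"
    done
    # expected on this host: node0=512 expecting 512 / node1=512 expecting 512
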
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.355 ************************************ 00:03:34.355 END TEST even_2G_alloc 00:03:34.355 ************************************ 00:03:34.355 22:54:26 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:34.355 22:54:26 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.355 22:54:26 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.355 22:54:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.355 ************************************ 00:03:34.355 START TEST odd_alloc 00:03:34.355 ************************************ 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.355 22:54:26 setup.sh.hugepages.odd_alloc -- 
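The odd_alloc test that starts here requests 2098176 kB of hugepages; at the default 2048 kB page size that is half a page more than 1024 pages, so nr_hugepages rounds up to 1025, and with HUGE_EVEN_ALLOC=yes the odd count is split 513/512 across the two nodes, as the nodes_test assignments in the trace show. A small worked sketch of that arithmetic (illustrative; the variable names below are made up, not taken from setup/hugepages.sh):

    # split an odd hugepage count across NUMA nodes as evenly as possible (sketch)
    size_kb=2098176 hugepage_kb=2048 no_nodes=2
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # ceiling division -> 1025
    declare -a nodes_test
    per_node=$(( nr_hugepages / no_nodes ))                         # 512
    remainder=$(( nr_hugepages % no_nodes ))                        # 1 extra page to place
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( per_node + (node < remainder ? 1 : 0) ))
    done
    echo "${nodes_test[@]}"                                         # -> 513 512
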
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:37.646 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:37.646 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.646 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.646 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170675492 kB' 'MemAvailable: 173682064 kB' 'Buffers: 4132 kB' 'Cached: 14538432 kB' 'SwapCached: 0 kB' 'Active: 11640136 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163164 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640896 kB' 'Mapped: 240736 kB' 'Shmem: 10525000 kB' 'KReclaimable: 266444 kB' 'Slab: 893392 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626948 kB' 'KernelStack: 20944 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 12654368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.647 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 
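Before counting explicit hugepages, verify_nr_hugepages first ruled out interference from transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a few entries back is checking the kernel's THP mode line (with [madvise] selected here), and since it does not contain [never], the script went on to read AnonHugePages from /proc/meminfo, which is 0 kB on this host, hence anon=0. A minimal sketch of that guard, reusing the read_node_meminfo sketch from earlier (illustrative, not the script's own code):

    # discount transparent hugepages before checking the explicit allocation (sketch)
    anon=0
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP is not disabled, so anonymous hugepages could skew the count; read them
        anon=$(read_node_meminfo AnonHugePages)                # 0 on this host
    fi
    echo "anon=$anon"                                          # -> anon=0
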
-- # local mem_f mem 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170675248 kB' 'MemAvailable: 173681820 kB' 'Buffers: 4132 kB' 'Cached: 14538432 kB' 'SwapCached: 0 kB' 'Active: 11638596 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161624 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639768 kB' 'Mapped: 240628 kB' 'Shmem: 10525000 kB' 'KReclaimable: 266444 kB' 'Slab: 893328 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626884 kB' 'KernelStack: 20880 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 12654384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.648 22:54:29 setup.sh.hugepages.odd_alloc -- 
[xtrace condensed: the get_meminfo read loop steps key by key through the rest of the /proc/meminfo snapshot above, from Cached through HugePages_Total, continuing past each field on its way to the requested HugePages_Surp]
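What the trace above is doing, in short: get_meminfo in setup/common.sh slurps /proc/meminfo (or, when a node index is passed, that node's meminfo under sysfs), then scans key by key with IFS=': ' until it hits the field it was asked for and echoes the value. A minimal stand-alone sketch of that lookup, using an illustrative helper name and a sed-based prefix strip rather than the project's own code:

#!/usr/bin/env bash
# Illustrative sketch only; get_meminfo_value is not the real setup/common.sh helper.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip that first so the
    # split below sees the same "Key: value" shape as /proc/meminfo.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_value HugePages_Surp      # prints 0 in the run traced above
get_meminfo_value HugePages_Total 0   # per-node lookup, if node0 exists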
22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170675312 kB' 'MemAvailable: 173681884 kB' 'Buffers: 4132 kB' 'Cached: 14538452 kB' 'SwapCached: 0 kB' 'Active: 11638856 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161884 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640076 kB' 'Mapped: 240628 kB' 'Shmem: 10525020 kB' 'KReclaimable: 266444 kB' 'Slab: 893328 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626884 kB' 'KernelStack: 20832 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 12654404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.650 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.650 22:54:29 
[xtrace condensed: the HugePages_Rsvd lookup walks the same meminfo snapshot, continuing past every field from Active(anon) through FileHugePages before reaching its match below]
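The same scan resolves HugePages_Rsvd just below, after which hugepages.sh reports nr_hugepages=1025 with no reserved, surplus or anonymous huge pages and checks the odd allocation against the kernel's accounting. Roughly, and with variable names that are assumptions rather than the script's own, that consistency check amounts to:

#!/usr/bin/env bash
# Rough illustration of the accounting check; the real script derives these
# values through get_meminfo rather than awk.
expected=1025                                   # the odd page count under test
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# With surplus and reserved both zero, this reduces to the kernel-reported
# pool matching exactly what the test asked for.
if (( expected == total + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi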
22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.651 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.651 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.651 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.651 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.651 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:37.652 nr_hugepages=1025 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.652 resv_hugepages=0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.652 surplus_hugepages=0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.652 anon_hugepages=0 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170675312 kB' 'MemAvailable: 173681884 kB' 'Buffers: 4132 kB' 'Cached: 14538476 kB' 'SwapCached: 0 kB' 'Active: 11638360 kB' 'Inactive: 3540592 kB' 'Active(anon): 11161388 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639516 kB' 'Mapped: 240628 kB' 'Shmem: 10525044 kB' 'KReclaimable: 266444 kB' 'Slab: 893328 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 626884 kB' 'KernelStack: 20816 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 12654424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.652 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
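The xtrace above (and just below, where HugePages_Total finally matches and 1025 is echoed) is the body of setup/common.sh's get_meminfo: pick /proc/meminfo or the per-node copy, strip the "Node <n> " prefix, then walk every field with IFS=': ', skipping each mismatch with continue until the requested key is found. A minimal standalone sketch of that walk, assuming only what the trace shows; the wrapper name and argument handling are illustrative, not the exact SPDK helper:

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

get_meminfo_sketch() {
    local get=$1 node=${2:-}        # field name, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # every mismatching field is skipped, as above
        echo "$val"
        return 0
    done
    return 1
}

# e.g.: get_meminfo_sketch HugePages_Total      -> 1025 on this system
#       get_meminfo_sketch HugePages_Surp 0     -> 0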
00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.653 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 84490904 kB' 'MemUsed: 13124724 kB' 'SwapCached: 0 kB' 'Active: 7470232 kB' 'Inactive: 3343336 kB' 
'Active(anon): 7252100 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366660 kB' 'Mapped: 179108 kB' 'AnonPages: 450064 kB' 'Shmem: 6805192 kB' 'KernelStack: 13512 kB' 'PageTables: 6512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525736 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.654 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86184156 kB' 'MemUsed: 7581384 kB' 'SwapCached: 0 kB' 'Active: 4168252 kB' 'Inactive: 197256 kB' 'Active(anon): 3909412 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4175968 kB' 'Mapped: 61520 kB' 'AnonPages: 189540 kB' 'Shmem: 3719872 kB' 'KernelStack: 7304 kB' 'PageTables: 2484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83912 kB' 'Slab: 367592 kB' 'SReclaimable: 83912 kB' 'SUnreclaim: 283680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:37.655 
22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.655 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
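This second pass is the same walk against node1's copy of meminfo; together with the node0 pass above it feeds the per-node accounting at hugepages.sh@115-117, where each node starts from the reserved count and then adds whatever HugePages_Surp its meminfo reports. A sketch of that accumulation, reusing the helper sketched earlier; resv and the array name are placeholders for the script's own variables:

declare -a nodes_test=()           # expected hugepage count per NUMA node
resv=0                             # reserved pages (0 in this run)

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}                      # ".../node0" -> 0, ".../node1" -> 1
    (( nodes_test[node] += resv ))               # hugepages.sh@116 in the log
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))               # hugepages.sh@117
done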
00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
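Once both nodes' surplus counts are folded in (the HugePages_Surp match and its echo 0 follow just below), odd_alloc does not care which node ended up with the extra page, only that the set of per-node counts matches: that is the sorted_t/sorted_s trick at hugepages.sh@126-130, where each count becomes an associative-array key and the key lists are compared. A self-contained sketch with the values visible in this run; which array drives which side of the "expecting" message is not recoverable from the expanded trace, so take the roles as illustrative:

declare -a nodes_test=(513 512)    # per-node counts the test computed
declare -a nodes_sys=(512 513)     # per-node counts read back from sysfs
declare -A sorted_t=() sorted_s=()

for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1    # duplicate counts collapse into one key
    sorted_s[${nodes_sys[node]}]=1
done

# Identical key sets expand to the same string, e.g. "512 513" == "512 513",
# mirroring the [[ 512 513 == \5\1\2\ \5\1\3 ]] check in the log below.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc distribution OK'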
00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.656 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:37.657 node0=512 expecting 513 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:37.657 node1=513 expecting 512 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:37.657 00:03:37.657 real 0m3.256s 00:03:37.657 user 0m1.317s 00:03:37.657 sys 0m1.990s 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:37.657 22:54:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.657 ************************************ 00:03:37.657 END TEST odd_alloc 00:03:37.657 ************************************ 00:03:37.657 22:54:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:37.657 22:54:29 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:37.657 22:54:29 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:37.657 22:54:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.657 ************************************ 00:03:37.657 START TEST custom_alloc 00:03:37.657 ************************************ 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.657 22:54:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:40.959 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.959 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:00:04.3 (8086 2021): 
Already using the vfio-pci driver 00:03:40.959 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.959 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.959 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.960 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169616928 kB' 'MemAvailable: 172623500 kB' 'Buffers: 4132 kB' 'Cached: 14538584 kB' 'SwapCached: 0 kB' 'Active: 11639884 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162912 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641000 kB' 'Mapped: 240660 kB' 
'Shmem: 10525152 kB' 'KReclaimable: 266444 kB' 'Slab: 893536 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627092 kB' 'KernelStack: 20832 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 12655036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318012 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB'
[setup/common.sh@31-32: per-key scan of the snapshot above; every key from MemTotal onward was read and skipped with 'continue' until AnonHugePages matched]
00:03:40.961 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.961 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:40.961 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
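The per-key trace above is the get_meminfo helper in setup/common.sh pulling a single key out of /proc/meminfo. A minimal bash sketch of what the xtrace implies, reconstructed from the trace rather than copied from the SPDK source (the argument handling and the exact per-NUMA-node condition are assumptions):

    # get_meminfo KEY [NODE] - print the value recorded for KEY, e.g. 1536 or 3145728
    get_meminfo() {
        local get=$1 node=${2:-}            # key to look up; NUMA node is optional (assumed)
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # per-node meminfo lives under sysfs and prefixes every line with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix when present
        # this loop is what produces the long IFS=': ' / read / continue trace above
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                      # the unit (kB) lands in _, only the number is printed
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run node is left empty, so all four lookups read the system-wide /proc/meminfo; get_meminfo AnonHugePages printed 0 above, which hugepages.sh stored as anon=0.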
00:03:40.961 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-29: local get=HugePages_Surp, node left empty, mem_f=/proc/meminfo, mapfile -t mem plus the Node-prefix strip, as in the call above]
00:03:40.961 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169616812 kB' 'MemAvailable: 172623384 kB' 'Buffers: 4132 kB' 'Cached: 14538588 kB' 'SwapCached: 0 kB' 'Active: 11639484 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162512 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640636 kB' 'Mapped: 240640 kB' 'Shmem: 10525156 kB' 'KReclaimable: 266444 kB' 'Slab: 893572 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627128 kB' 'KernelStack: 20816 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 12655052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB'
[setup/common.sh@31-32: per-key scan of the snapshot above; every key was read and skipped with 'continue' until HugePages_Surp matched]
00:03:40.962 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.962 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:40.962 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:40.962 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-29: same prologue with get=HugePages_Rsvd]
00:03:40.962 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169616812 kB' 'MemAvailable: 172623384 kB' 'Buffers: 4132 kB' 'Cached: 14538604 kB' 'SwapCached: 0 kB' 'Active: 11639440 kB' 'Inactive: 3540592 kB' 'Active(anon): 11162468 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640564 kB' 'Mapped: 240640 kB' 'Shmem: 10525172 kB' 'KReclaimable: 266444 kB' 'Slab: 893580 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627136 kB' 'KernelStack: 20816 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 12655072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB'
[setup/common.sh@31-32: per-key scan of the snapshot above; every key was read and skipped with 'continue' until HugePages_Rsvd matched]
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:40.963 nr_hugepages=1536
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:40.963 resv_hugepages=0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:40.963 surplus_hugepages=0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:40.963 anon_hugepages=0
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
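The two arithmetic checks at setup/hugepages.sh@107-109 are the point of this step: the observed pool (HugePages_Total: 1536, Hugepagesize: 2048 kB, hence Hugetlb: 1536 x 2048 kB = 3145728 kB in the snapshots) has to account for the requested pages plus any surplus and reserved ones, and anon, surp and resv are all 0 here. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above with assumed variable names rather than the script's own:

    nr_hugepages=1536                                   # what this custom_alloc test asked for
    anon=$(get_meminfo AnonHugePages)                   # 0 (kB of THP-backed anonymous memory)
    surp=$(get_meminfo HugePages_Surp)                  # 0
    resv=$(get_meminfo HugePages_Rsvd)                  # 0
    echo "nr_hugepages=$(get_meminfo HugePages_Total)"  # 1536
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # the pool must add up: observed total == requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    # and the byte accounting agrees while only 2048 kB pages are in play:
    (( $(get_meminfo HugePages_Total) * $(get_meminfo Hugepagesize) == $(get_meminfo Hugetlb) )) || exit 1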
10525176 kB' 'KReclaimable: 266444 kB' 'Slab: 893580 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627136 kB' 'KernelStack: 20816 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 12655096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
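What the xtrace around here is doing: setup/common.sh's get_meminfo snapshots /proc/meminfo with printf, then walks every key with IFS=': ' and read -r var val _ until it reaches the requested field (in the calls traced here HugePages_Rsvd resolves to 0 against a pool of 1536 hugepages). A minimal standalone sketch of the same lookup idea, using a hypothetical helper name rather than the project's own function:

    # Sketch only (helper name and plain-/proc/meminfo scope are illustrative):
    # print the numeric value of one /proc/meminfo field, e.g. HugePages_Rsvd.
    get_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Rsvd   # prints 0 on the system traced above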
00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.963 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
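The get_nodes loop that begins in the trace above enumerates the NUMA node directories and records each node's current hugepage count; in this run node0 holds 512 pages and node1 the remaining 1024. A sketch that pulls the same per-node HugePages_Total values straight from sysfs (the array and variable names are illustrative, not the script's own):

    # Sketch only: collect HugePages_Total per NUMA node from the per-node meminfo files.
    node_hugepages=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        node_hugepages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    done
    declare -p node_hugepages   # on this box: [0]="512" [1]="1024"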
00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 84478444 kB' 'MemUsed: 13137184 kB' 'SwapCached: 0 kB' 'Active: 7469528 kB' 'Inactive: 3343336 kB' 'Active(anon): 7251396 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366792 kB' 'Mapped: 179120 kB' 'AnonPages: 449220 kB' 'Shmem: 6805324 kB' 'KernelStack: 13496 kB' 'PageTables: 6480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525652 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.964 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 85138116 kB' 'MemUsed: 8627424 kB' 'SwapCached: 0 kB' 'Active: 4170360 kB' 'Inactive: 197256 kB' 'Active(anon): 3911520 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4175988 kB' 'Mapped: 61520 kB' 'AnonPages: 191788 kB' 'Shmem: 3719892 kB' 'KernelStack: 7352 kB' 'PageTables: 2628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83912 kB' 'Slab: 367928 kB' 'SReclaimable: 83912 kB' 'SUnreclaim: 284016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.965 22:54:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.965 node0=512 expecting 512 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:40.965 node1=1024 expecting 1024 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:40.965 00:03:40.965 real 0m3.215s 00:03:40.965 user 0m1.257s 00:03:40.965 sys 0m2.023s 00:03:40.965 22:54:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:40.966 22:54:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.966 ************************************ 00:03:40.966 END TEST custom_alloc 00:03:40.966 ************************************ 00:03:40.966 22:54:33 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:40.966 22:54:33 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:40.966 22:54:33 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:40.966 22:54:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.966 ************************************ 00:03:40.966 START TEST no_shrink_alloc 00:03:40.966 ************************************ 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.966 22:54:33 
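Between the end of custom_alloc and the start of no_shrink_alloc above, get_test_nr_hugepages turns the requested 2097152 kB into nr_hugepages=1024 and pins the whole allocation to node 0. The conversion is just the requested size divided by the default 2048 kB hugepage; a one-liner sketch of that arithmetic (variable names are illustrative):

    # Sketch of the size-to-count conversion: 2097152 kB / 2048 kB per page = 1024 pages.
    size_kb=2097152
    hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this system
    echo $(( size_kb / hugepage_kb ))                               # 1024, all assigned to node 0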
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.966 22:54:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:43.500 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:43.500 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.500 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.500 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:43.500 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.500 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- 
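The trace just above is get_test_nr_hugepages and get_test_nr_hugepages_per_node turning the 2097152 kB request (2 GiB at 2048 kB per page) into 1024 hugepages pinned to node 0, after which scripts/setup.sh reports that all of the listed PCI devices are already bound to vfio-pci. A minimal sketch of that bookkeeping, using illustrative names rather than the literal setup/hugepages.sh code:

#!/usr/bin/env bash
# Sketch only: convert a size in kB into a hugepage count and record it per node,
# mirroring the nodes_test[] array the trace above shows being filled.
default_hugepages_kb=2048            # Hugepagesize reported in the meminfo dump that follows

get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift
    local node_ids=("$@")            # e.g. (0) for this no_shrink_alloc run
    local nr=$((size_kb / default_hugepages_kb))
    declare -ga nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr
    done
}

get_test_nr_hugepages_sketch 2097152 0
declare -p nodes_test                # -> declare -a nodes_test=([0]="1024")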
setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683124 kB' 'MemAvailable: 173689696 kB' 'Buffers: 4132 kB' 'Cached: 14538736 kB' 'SwapCached: 0 kB' 'Active: 11640572 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163600 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641080 kB' 'Mapped: 240760 kB' 'Shmem: 10525304 kB' 'KReclaimable: 266444 kB' 'Slab: 894240 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627796 kB' 'KernelStack: 20832 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12655444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317900 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.501 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 
22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.502 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683144 kB' 'MemAvailable: 173689716 kB' 'Buffers: 4132 kB' 'Cached: 14538740 kB' 'SwapCached: 0 kB' 'Active: 11640496 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163524 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641008 kB' 'Mapped: 240732 kB' 'Shmem: 10525308 kB' 
'KReclaimable: 266444 kB' 'Slab: 894044 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627600 kB' 'KernelStack: 20832 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12655464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317868 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 
22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.503 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 
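The long run of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue lines, and now the same scan for HugePages_Surp, is the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key; for AnonHugePages it echoed 0, so the test proceeds with anon=0. The snapshot it parses already reflects the pool configured in the previous step: HugePages_Total 1024 at Hugepagesize 2048 kB is exactly the 2097152 kB reported as Hugetlb. A rough standalone equivalent of the helper, reconstructed from the behaviour visible in the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Print a single /proc/meminfo field, or a node-local field when a node number
# is given; absent fields fall back to 0, matching the "echo 0" seen above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # strip the "Node N " prefix of per-node files
    echo 0
}

get_meminfo_sketch AnonHugePages     # 0 on this system, as in the run above
get_meminfo_sketch HugePages_Total   # 1024 while the test pool is configured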
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.504 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.505 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683624 kB' 'MemAvailable: 173690196 kB' 'Buffers: 4132 kB' 'Cached: 14538752 kB' 'SwapCached: 0 kB' 'Active: 11639996 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163024 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640924 kB' 'Mapped: 240652 kB' 'Shmem: 10525320 kB' 'KReclaimable: 266444 kB' 'Slab: 894036 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627592 
kB' 'KernelStack: 20816 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12655484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317884 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 
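By this point verify_nr_hugepages has settled on anon=0 and surp=0 and is repeating the same meminfo walk for HugePages_Rsvd. Boiled down, and with the caveat that the real pass/fail condition is whatever setup/hugepages.sh computes rather than this guess, the accounting it is building up looks roughly like:

#!/usr/bin/env bash
# Hedged sketch of the verification idea: the usable global pool (total minus
# surplus) should equal the 1024 pages the test configured, with reserved pages
# collected alongside. Not the literal verify_nr_hugepages implementation.
expected=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total - surp == expected )); then
    echo "hugepage pool OK: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage pool: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi

With the values visible in the dumps above (HugePages_Total 1024, HugePages_Surp 0, HugePages_Rsvd 0) the sketch reports the pool as OK, which is the state no_shrink_alloc starts from.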
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.506 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.507 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
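The trace on either side of this point is setup/common.sh's get_meminfo helper stepping through /proc/meminfo one field at a time: every key that is not the requested one (here HugePages_Rsvd; the backslash-escaped form is just how the trace renders the quoted comparison string) logs a failed [[ ]] test and a continue, until the matching key is found and its value is echoed back to hugepages.sh. When a node number is passed, the same loop runs over /sys/devices/system/node/node<N>/meminfo with the leading "Node <N> " prefix stripped, as the mem=("${mem[@]#Node +([0-9]) }") step in this trace shows. A minimal standalone sketch of the same read-and-match pattern (illustrative names, not the SPDK script itself):

# Sketch: echo the value of one /proc/meminfo field, e.g. HugePages_Rsvd.
# Assumes the usual "Key:   value [kB]" layout of /proc/meminfo.
get_meminfo_field() {
    local want=$1 key value _
    while IFS=': ' read -r key value _; do
        [[ $key == "$want" ]] && { echo "$value"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_field HugePages_Rsvd   # this run resolves it to 0 a few lines below

The per-character escaping and the long run of continue entries are the cost of tracing this loop; functionally it is only the single read/match above.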
00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.508 nr_hugepages=1024 00:03:43.508 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.508 resv_hugepages=0 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.509 surplus_hugepages=0 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.509 anon_hugepages=0 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683624 kB' 'MemAvailable: 173690196 kB' 'Buffers: 4132 kB' 'Cached: 14538752 kB' 'SwapCached: 0 kB' 'Active: 11639996 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163024 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640924 kB' 'Mapped: 240652 kB' 'Shmem: 10525320 kB' 'KReclaimable: 
266444 kB' 'Slab: 894036 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627592 kB' 'KernelStack: 20816 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12655508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317884 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.509 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
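The same field walk repeats here for HugePages_Total. Together with the values echoed a little earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), it feeds the consistency check this no_shrink_alloc test is built around: the global HugePages_Total must equal the requested pages plus surplus plus reserved, and the per-NUMA-node counts taken from /sys/devices/system/node/node*/meminfo must add up to the same figure (ending in the "node0=1024 expecting 1024" line further down). A hedged sketch of that arithmetic, reusing the get_meminfo_field helper sketched above; variable names are illustrative:

# Accounting the test verifies: global total == requested + surplus + reserved,
# and the per-node totals sum to the global figure.
nr_hugepages=1024 resv=0 surp=0
total=$(get_meminfo_field HugePages_Total)            # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'

node_sum=0
for node in /sys/devices/system/node/node[0-9]*; do
    pages=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    echo "${node##*/}=$pages"                         # e.g. node0=1024, node1=0
    (( node_sum += pages ))
done
(( node_sum == total )) || echo 'per-node totals do not match the global count'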
00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.510 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.511 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.511 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83450412 kB' 'MemUsed: 14165216 kB' 'SwapCached: 0 kB' 'Active: 7469556 kB' 'Inactive: 3343336 kB' 'Active(anon): 7251424 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10366912 kB' 'Mapped: 179132 kB' 'AnonPages: 449168 kB' 'Shmem: 6805444 kB' 'KernelStack: 13528 kB' 'PageTables: 6572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 526108 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.512 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.512 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 
22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.513 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.514 node0=1024 expecting 1024 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.514 22:54:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:46.807 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.807 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.807 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.807 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.807 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.807 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683856 kB' 'MemAvailable: 173690428 kB' 'Buffers: 4132 kB' 'Cached: 14538876 kB' 'SwapCached: 0 kB' 'Active: 11641884 kB' 'Inactive: 3540592 kB' 'Active(anon): 11164912 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641880 kB' 'Mapped: 241076 kB' 'Shmem: 10525444 kB' 'KReclaimable: 266444 kB' 'Slab: 893784 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627340 kB' 'KernelStack: 21040 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12658576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318188 kB' 'VmallocChunk: 0 kB' 
'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.808 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- 
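The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key with IFS=': ' until it reaches the requested field; here AnonHugePages resolves to 0, so anon=0. The same lookup can be written more compactly; a minimal sketch under those assumptions (get_field is a hypothetical name, not the project helper):

  # Return the value of one /proc/meminfo field, e.g. AnonHugePages.
  get_field() {
      local want=$1 key val _
      while IFS=': ' read -r key val _; do
          if [[ $key == "$want" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  anon=$(get_field AnonHugePages)   # 0 on this host, matching the trace above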
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170685592 kB' 'MemAvailable: 173692164 kB' 'Buffers: 4132 kB' 'Cached: 14538876 kB' 'SwapCached: 0 kB' 'Active: 11641080 kB' 'Inactive: 3540592 kB' 'Active(anon): 11164108 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641984 kB' 'Mapped: 240728 kB' 'Shmem: 10525444 kB' 'KReclaimable: 266444 kB' 'Slab: 893776 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627332 kB' 'KernelStack: 21056 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12658592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318108 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 
22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.809 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.810 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
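When a specific node is requested, the traced helper switches mem_f to /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N> " prefix before parsing (the "${mem[@]#Node +([0-9]) }" expansion visible above). A per-node variant of the same lookup, again only a sketch with a hypothetical function name:

  # Same lookup against a single NUMA node's meminfo, whose lines look like
  # "Node 0 HugePages_Total:  1024".
  get_node_field() {
      local node=$1 want=$2 _n _id key val _
      while read -r _n _id key val _; do
          [[ ${key%:} == "$want" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }
  get_node_field 0 HugePages_Total   # 1024 in this run, per the log above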
local mem_f mem 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683828 kB' 'MemAvailable: 173690400 kB' 'Buffers: 4132 kB' 'Cached: 14538896 kB' 'SwapCached: 0 kB' 'Active: 11641276 kB' 'Inactive: 3540592 kB' 'Active(anon): 11164304 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642084 kB' 'Mapped: 240668 kB' 'Shmem: 10525464 kB' 'KReclaimable: 266444 kB' 'Slab: 893776 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627332 kB' 'KernelStack: 20992 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12658616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318076 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 
22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.811 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.812 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.813 nr_hugepages=1024 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.813 resv_hugepages=0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.813 surplus_hugepages=0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.813 anon_hugepages=0 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.813 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 170683264 kB' 'MemAvailable: 173689836 kB' 'Buffers: 4132 kB' 'Cached: 14538916 kB' 'SwapCached: 0 kB' 'Active: 11640784 kB' 'Inactive: 3540592 kB' 'Active(anon): 11163812 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641600 kB' 'Mapped: 240668 kB' 'Shmem: 10525484 kB' 'KReclaimable: 266444 kB' 'Slab: 893776 kB' 'SReclaimable: 266444 kB' 'SUnreclaim: 627332 kB' 'KernelStack: 20832 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 12656028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318028 kB' 'VmallocChunk: 0 kB' 'Percpu: 70656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3705812 kB' 'DirectMap2M: 42110976 kB' 'DirectMap1G: 156237824 kB' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.813 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.814 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.815 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83450956 kB' 'MemUsed: 14164672 kB' 'SwapCached: 0 kB' 'Active: 7469500 kB' 'Inactive: 3343336 kB' 'Active(anon): 7251368 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10367020 kB' 'Mapped: 179140 kB' 'AnonPages: 449016 kB' 'Shmem: 6805552 kB' 'KernelStack: 13496 kB' 'PageTables: 6468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182532 kB' 'Slab: 525952 kB' 'SReclaimable: 182532 kB' 'SUnreclaim: 343420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.815 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.816 node0=1024 expecting 1024 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.816 00:03:46.816 real 0m5.875s 00:03:46.816 user 0m2.276s 00:03:46.816 sys 0m3.510s 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.816 22:54:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.817 ************************************ 00:03:46.817 END TEST no_shrink_alloc 00:03:46.817 ************************************ 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.817 22:54:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.817 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.817 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.817 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:46.817 22:54:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:46.817 00:03:46.817 real 0m24.277s 00:03:46.817 user 0m9.095s 00:03:46.817 sys 0m13.994s 00:03:46.817 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.817 22:54:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.817 ************************************ 00:03:46.817 END TEST hugepages 00:03:46.817 ************************************ 00:03:46.817 22:54:39 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:46.817 22:54:39 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.817 22:54:39 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.817 22:54:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.817 ************************************ 00:03:46.817 START TEST driver 00:03:46.817 ************************************ 00:03:46.817 22:54:39 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:47.075 * Looking for test storage... 
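The no_shrink_alloc trace above is setup/common.sh's get_meminfo helper at work: it walks /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node argument is given) field by field with IFS=': ' until it reaches the requested key, echoes the value, and returns, which is how resv_hugepages=0 and the per-node HugePages_Surp of 0 are derived before the final "node0=1024 expecting 1024" check. A minimal bash sketch of that lookup, assuming the same "Key: value kB" layout (an illustration only, not the exact SPDK helper):

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Node files prefix every line with "Node N "; strip it so the key lands in $var.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0
}

# get_meminfo HugePages_Rsvd    -> 0   (resv_hugepages in the trace above)
# get_meminfo HugePages_Surp 0  -> 0   (node0 surplus in the trace above)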
00:03:47.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:47.075 22:54:39 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:47.075 22:54:39 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.075 22:54:39 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.266 22:54:43 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:51.266 22:54:43 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:51.266 22:54:43 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:51.266 22:54:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:51.266 ************************************ 00:03:51.266 START TEST guess_driver 00:03:51.266 ************************************ 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 222 > 0 )) 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:51.266 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:51.266 Looking for driver=vfio-pci 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.266 22:54:43 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.877 22:54:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.253 22:54:47 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.438 00:03:59.438 real 0m7.818s 00:03:59.438 user 0m1.933s 00:03:59.438 sys 0m3.641s 00:03:59.438 22:54:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:59.438 22:54:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 ************************************ 00:03:59.438 END TEST guess_driver 00:03:59.438 ************************************ 00:03:59.438 00:03:59.438 real 0m12.024s 00:03:59.438 user 0m3.065s 00:03:59.438 sys 0m5.954s 00:03:59.438 22:54:51 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:59.438 
22:54:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 ************************************ 00:03:59.438 END TEST driver 00:03:59.438 ************************************ 00:03:59.438 22:54:51 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:59.438 22:54:51 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:59.438 22:54:51 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:59.438 22:54:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 ************************************ 00:03:59.438 START TEST devices 00:03:59.438 ************************************ 00:03:59.438 22:54:51 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:59.438 * Looking for test storage... 00:03:59.438 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:59.438 22:54:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:59.438 22:54:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:59.438 22:54:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.438 22:54:51 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.970 22:54:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:01.970 22:54:54 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.971 22:54:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:01.971 22:54:54 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:01.971 No valid GPT data, bailing 00:04:01.971 
22:54:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.971 22:54:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:01.971 22:54:54 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:01.971 22:54:54 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:01.971 22:54:54 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:01.971 22:54:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.971 ************************************ 00:04:01.971 START TEST nvme_mount 00:04:01.971 ************************************ 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:04:01.971 22:54:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:03.349 Creating new GPT entries in memory. 00:04:03.349 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.349 other utilities. 00:04:03.349 22:54:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.349 22:54:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.349 22:54:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.349 22:54:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.349 22:54:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:04.284 Creating new GPT entries in memory. 00:04:04.284 The operation has completed successfully. 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 720876 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.284 22:54:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.568 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.569 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.569 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.569 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.569 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.569 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
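The xtrace above repeats one basic cycle, first against the partition /dev/nvme0n1p1 and then (starting here) against the whole disk with a 1024M size cap: make an ext4 filesystem, mount it under test/setup/nvme_mount, drop a marker file, verify the mount and marker, and tear everything down with wipefs. A condensed sketch of that cycle follows; it is an approximation of what the setup/common.sh and setup/devices.sh helpers do in this trace, not their actual code.

dev=/dev/nvme0n1p1                     # partition pass; the whole-disk pass uses /dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
testfile=$mnt/test_nvme

mkfs.ext4 -qF "$dev"                   # same flags as the mkfs step in the trace
mkdir -p "$mnt"
mount "$dev" "$mnt"
: > "$testfile"                        # marker file the verify pass looks for

mountpoint -q "$mnt"                   # verify: still mounted and marker present
[[ -e "$testfile" ]]

rm "$testfile"                         # teardown mirrors cleanup_nvme
umount "$mnt"
wipefs --all "$dev"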
00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.569 22:54:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.103 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.104 22:55:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 
22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.392 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.392 00:04:13.392 real 0m10.980s 00:04:13.392 user 0m3.145s 00:04:13.392 sys 0m5.619s 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.392 22:55:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.392 ************************************ 00:04:13.392 END TEST nvme_mount 00:04:13.392 ************************************ 00:04:13.392 22:55:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.392 22:55:05 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
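Each START TEST / END TEST banner with its real/user/sys summary comes from the run_test wrapper in autotest_common.sh, whose body never appears in this trace. A hypothetical stand-in that reproduces just the banner-plus-timing pattern seen here could be as small as:

run_test_sketch() {
    # hypothetical replacement for run_test; the real helper also toggles
    # xtrace and does more exit-code bookkeeping than this
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# usage mirroring the trace: run_test_sketch dm_mount dm_mount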
00:04:13.392 22:55:05 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.392 22:55:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.392 ************************************ 00:04:13.392 START TEST dm_mount 00:04:13.392 ************************************ 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.392 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.393 22:55:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:14.330 Creating new GPT entries in memory. 00:04:14.330 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.330 other utilities. 00:04:14.330 22:55:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.330 22:55:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.330 22:55:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.330 22:55:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.330 22:55:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.267 Creating new GPT entries in memory. 00:04:15.267 The operation has completed successfully. 
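The dm_mount setup partitions the disk the way the sgdisk calls in this trace show: wipe the GPT, then carve out two 1 GiB partitions of 2097152 sectors each (which is where size=1073741824 divided by 512 comes from) under flock, one sgdisk call per partition; the second call follows below. Stripped of the sync_dev_uevents.sh wrapper, approximated here with udevadm settle, the sequence is roughly:

disk=/dev/nvme0n1

sgdisk "$disk" --zap-all                               # destroy any existing GPT
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1: sectors 2048..2099199
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2: the next 1 GiB
udevadm settle                                         # rough stand-in for scripts/sync_dev_uevents.sh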
00:04:15.267 22:55:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:15.267 22:55:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.267 22:55:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.267 22:55:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.267 22:55:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:16.204 The operation has completed successfully. 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 725350 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:16.204 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.205 22:55:08 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.205 22:55:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.494 22:55:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
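Earlier in this test, the dmsetup create nvme_dm_test step joined the two partitions into a single device-mapper device, readlink resolved it to /dev/dm-2, and the verify pass above checks that both partitions list that dm device under their holders/ directory. The dmsetup table itself never appears in the xtrace, so the linear concatenation below is an assumption consistent with those holder checks, not the literal table from devices.sh:

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
len1=$(blockdev --getsz "$p1")      # partition sizes in 512-byte sectors
len2=$(blockdev --getsz "$p2")

# assumed table: p1 followed by p2 as one linear device
dmsetup create nvme_dm_test <<EOF
0 $len1 linear $p1 0
$len1 $len2 linear $p2 0
EOF

dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-2 in this run
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]            # both partitions should
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]            # now report the holder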
00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:22.052 22:55:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:22.052 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:22.052 00:04:22.052 real 0m8.858s 00:04:22.052 user 0m2.031s 00:04:22.052 sys 0m3.755s 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.052 22:55:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:22.052 ************************************ 00:04:22.052 END TEST dm_mount 00:04:22.052 ************************************ 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
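The closing lines of the devices suite run the EXIT-trap cleanup registered back at devices.sh@190: unmount whatever is still mounted, tear down the nvme_dm_test mapping, and wipe every block device the tests touched, which is what the wipefs output below reports. A simplified sketch of that trap, using the same workspace paths as this log, is:

MNT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup

cleanup_sketch() {
    # simplified version of cleanup/cleanup_nvme/cleanup_dm from devices.sh
    mountpoint -q "$MNT/nvme_mount" && umount "$MNT/nvme_mount"
    mountpoint -q "$MNT/dm_mount" && umount "$MNT/dm_mount"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
        [[ -b $dev ]] && wipefs --all "$dev"
    done
    return 0
}

trap cleanup_sketch EXIT    # devices.sh registers its real cleanup the same way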
00:04:22.052 22:55:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.340 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.340 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.340 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.340 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.340 22:55:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.340 00:04:22.340 real 0m23.252s 00:04:22.340 user 0m6.269s 00:04:22.340 sys 0m11.450s 00:04:22.340 22:55:14 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.340 22:55:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.340 ************************************ 00:04:22.340 END TEST devices 00:04:22.340 ************************************ 00:04:22.340 00:04:22.340 real 1m21.907s 00:04:22.340 user 0m25.863s 00:04:22.340 sys 0m44.452s 00:04:22.340 22:55:14 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.340 22:55:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.340 ************************************ 00:04:22.340 END TEST setup.sh 00:04:22.340 ************************************ 00:04:22.340 22:55:14 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:25.632 Hugepages 00:04:25.632 node hugesize free / total 00:04:25.632 node0 1048576kB 0 / 0 00:04:25.632 node0 2048kB 2048 / 2048 00:04:25.632 node1 1048576kB 0 / 0 00:04:25.632 node1 2048kB 0 / 0 00:04:25.632 00:04:25.632 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.632 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:25.632 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:25.632 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:25.632 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:25.632 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:25.632 22:55:17 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.632 22:55:17 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.632 22:55:17 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.632 22:55:17 -- 
common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:28.168 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:28.168 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.073 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.073 22:55:21 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:31.009 22:55:22 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:31.009 22:55:22 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:31.009 22:55:22 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.009 22:55:22 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:31.009 22:55:22 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:31.009 22:55:22 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:31.009 22:55:22 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.009 22:55:22 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.009 22:55:22 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:31.009 22:55:23 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:31.009 22:55:23 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:04:31.009 22:55:23 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.543 Waiting for block devices as requested 00:04:33.803 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:33.803 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:33.803 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:34.062 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:34.062 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:34.062 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:34.062 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:34.322 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:34.322 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:34.322 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:34.581 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:34.581 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:34.581 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:34.581 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:34.840 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:34.840 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:34.840 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:35.099 22:55:27 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:35.099 22:55:27 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1501 -- # 
readlink -f /sys/class/nvme/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1501 -- # grep 0000:5f:00.0/nvme/nvme 00:04:35.099 22:55:27 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:35.099 22:55:27 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:35.099 22:55:27 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:35.099 22:55:27 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:35.099 22:55:27 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:04:35.099 22:55:27 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:35.099 22:55:27 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:35.099 22:55:27 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:35.099 22:55:27 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:35.099 22:55:27 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:35.099 22:55:27 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:35.099 22:55:27 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:35.099 22:55:27 -- common/autotest_common.sh@1556 -- # continue 00:04:35.099 22:55:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:35.099 22:55:27 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:35.099 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:35.099 22:55:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:35.099 22:55:27 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:35.099 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:35.099 22:55:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:38.387 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.387 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.766 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.766 22:55:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:39.766 22:55:31 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:39.766 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.766 22:55:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:39.766 22:55:31 -- 
common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:39.766 22:55:31 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.766 22:55:31 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:39.766 22:55:31 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:39.766 22:55:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:39.766 22:55:31 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:39.766 22:55:31 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:39.766 22:55:31 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.766 22:55:31 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.766 22:55:31 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:39.766 22:55:31 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:39.766 22:55:31 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:04:39.766 22:55:31 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:39.766 22:55:31 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:39.766 22:55:31 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:04:39.766 22:55:31 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:39.766 22:55:31 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:04:39.766 22:55:31 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5f:00.0 00:04:39.766 22:55:31 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5f:00.0 ]] 00:04:39.766 22:55:31 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=735204 00:04:39.766 22:55:31 -- common/autotest_common.sh@1597 -- # waitforlisten 735204 00:04:39.766 22:55:31 -- common/autotest_common.sh@830 -- # '[' -z 735204 ']' 00:04:39.766 22:55:31 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.766 22:55:31 -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:39.766 22:55:31 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.766 22:55:31 -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:39.766 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.766 22:55:31 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.766 [2024-06-07 22:55:32.012690] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
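[editor's note] The trace above shows opal_revert_cleanup building its BDF list: gen_nvme.sh enumerates the NVMe controllers, each candidate's PCI device ID is read from sysfs and compared against 0x0a54, and only matching controllers are kept before spdk_tgt is launched. The sketch below reproduces that selection step outside the harness; the SPDK checkout path and the 0x0a54 device ID are copied from this log, while the variable names are illustrative only.

    #!/usr/bin/env bash
    # Hedged sketch: list NVMe BDFs the same way the trace does, then keep
    # only controllers whose PCI device ID matches 0x0a54 (values from this log).
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as printed in the trace
    want_id=0x0a54                                          # device ID checked at @1579-@1580

    mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    opal_bdfs=()
    for bdf in "${all_bdfs[@]}"; do
        # same sysfs read as 'cat /sys/bus/pci/devices/<bdf>/device' in the trace
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want_id" ]] && opal_bdfs+=("$bdf")
    done

    printf '%s\n' "${opal_bdfs[@]}"   # on this node: 0000:5f:00.0

With that list in hand, the test starts spdk_tgt and issues rpc.py bdev_nvme_attach_controller and bdev_nvme_opal_revert per BDF, which is exactly what the next stretch of the trace records (including the expected "nvme0 not support opal" error on this drive).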
00:04:39.766 [2024-06-07 22:55:32.012734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735204 ] 00:04:39.766 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.025 [2024-06-07 22:55:32.069518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.025 [2024-06-07 22:55:32.148764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.593 22:55:32 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:40.593 22:55:32 -- common/autotest_common.sh@863 -- # return 0 00:04:40.593 22:55:32 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:04:40.593 22:55:32 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:04:40.593 22:55:32 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:43.882 nvme0n1 00:04:43.882 22:55:35 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:43.882 [2024-06-07 22:55:35.931401] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:43.882 request: 00:04:43.882 { 00:04:43.882 "nvme_ctrlr_name": "nvme0", 00:04:43.882 "password": "test", 00:04:43.882 "method": "bdev_nvme_opal_revert", 00:04:43.882 "req_id": 1 00:04:43.882 } 00:04:43.882 Got JSON-RPC error response 00:04:43.882 response: 00:04:43.882 { 00:04:43.882 "code": -32602, 00:04:43.882 "message": "Invalid parameters" 00:04:43.882 } 00:04:43.882 22:55:35 -- common/autotest_common.sh@1603 -- # true 00:04:43.882 22:55:35 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:04:43.882 22:55:35 -- common/autotest_common.sh@1607 -- # killprocess 735204 00:04:43.882 22:55:35 -- common/autotest_common.sh@949 -- # '[' -z 735204 ']' 00:04:43.882 22:55:35 -- common/autotest_common.sh@953 -- # kill -0 735204 00:04:43.882 22:55:35 -- common/autotest_common.sh@954 -- # uname 00:04:43.882 22:55:35 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:43.882 22:55:35 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 735204 00:04:43.882 22:55:35 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:43.882 22:55:35 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:43.882 22:55:35 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 735204' 00:04:43.882 killing process with pid 735204 00:04:43.882 22:55:35 -- common/autotest_common.sh@968 -- # kill 735204 00:04:43.882 22:55:35 -- common/autotest_common.sh@973 -- # wait 735204 00:04:46.420 22:55:38 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:46.420 22:55:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:46.420 22:55:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.420 22:55:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:46.420 22:55:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:46.420 22:55:38 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:46.420 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.420 22:55:38 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:46.421 22:55:38 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:46.421 22:55:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.421 22:55:38 -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.421 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.421 ************************************ 00:04:46.421 START TEST env 00:04:46.421 ************************************ 00:04:46.421 22:55:38 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:46.421 * Looking for test storage... 00:04:46.421 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:46.421 22:55:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:46.421 22:55:38 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.421 22:55:38 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.421 22:55:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.421 ************************************ 00:04:46.421 START TEST env_memory 00:04:46.421 ************************************ 00:04:46.421 22:55:38 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:46.421 00:04:46.421 00:04:46.421 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.421 http://cunit.sourceforge.net/ 00:04:46.421 00:04:46.421 00:04:46.421 Suite: memory 00:04:46.421 Test: alloc and free memory map ...[2024-06-07 22:55:38.359379] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.421 passed 00:04:46.421 Test: mem map translation ...[2024-06-07 22:55:38.377118] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.421 [2024-06-07 22:55:38.377131] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.421 [2024-06-07 22:55:38.377164] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.421 [2024-06-07 22:55:38.377170] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.421 passed 00:04:46.421 Test: mem map registration ...[2024-06-07 22:55:38.412681] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.421 [2024-06-07 22:55:38.412694] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.421 passed 00:04:46.421 Test: mem map adjacent registrations ...passed 00:04:46.421 00:04:46.421 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.421 suites 1 1 n/a 0 0 00:04:46.421 tests 4 4 4 0 0 00:04:46.421 asserts 152 152 152 0 n/a 00:04:46.421 00:04:46.421 Elapsed time = 0.134 seconds 00:04:46.421 00:04:46.421 real 0m0.146s 00:04:46.421 user 0m0.136s 00:04:46.421 sys 0m0.009s 00:04:46.421 22:55:38 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:46.421 22:55:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.421 ************************************ 00:04:46.421 END 
TEST env_memory 00:04:46.421 ************************************ 00:04:46.421 22:55:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:46.421 22:55:38 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.421 22:55:38 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.421 22:55:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.421 ************************************ 00:04:46.421 START TEST env_vtophys 00:04:46.421 ************************************ 00:04:46.421 22:55:38 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:46.421 EAL: lib.eal log level changed from notice to debug 00:04:46.421 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.421 EAL: Detected lcore 1 as core 1 on socket 0 00:04:46.421 EAL: Detected lcore 2 as core 2 on socket 0 00:04:46.421 EAL: Detected lcore 3 as core 3 on socket 0 00:04:46.421 EAL: Detected lcore 4 as core 4 on socket 0 00:04:46.421 EAL: Detected lcore 5 as core 5 on socket 0 00:04:46.421 EAL: Detected lcore 6 as core 6 on socket 0 00:04:46.421 EAL: Detected lcore 7 as core 9 on socket 0 00:04:46.421 EAL: Detected lcore 8 as core 10 on socket 0 00:04:46.421 EAL: Detected lcore 9 as core 11 on socket 0 00:04:46.421 EAL: Detected lcore 10 as core 12 on socket 0 00:04:46.421 EAL: Detected lcore 11 as core 13 on socket 0 00:04:46.421 EAL: Detected lcore 12 as core 16 on socket 0 00:04:46.421 EAL: Detected lcore 13 as core 17 on socket 0 00:04:46.421 EAL: Detected lcore 14 as core 18 on socket 0 00:04:46.421 EAL: Detected lcore 15 as core 19 on socket 0 00:04:46.421 EAL: Detected lcore 16 as core 20 on socket 0 00:04:46.421 EAL: Detected lcore 17 as core 21 on socket 0 00:04:46.421 EAL: Detected lcore 18 as core 24 on socket 0 00:04:46.421 EAL: Detected lcore 19 as core 25 on socket 0 00:04:46.421 EAL: Detected lcore 20 as core 26 on socket 0 00:04:46.421 EAL: Detected lcore 21 as core 27 on socket 0 00:04:46.421 EAL: Detected lcore 22 as core 28 on socket 0 00:04:46.421 EAL: Detected lcore 23 as core 29 on socket 0 00:04:46.421 EAL: Detected lcore 24 as core 0 on socket 1 00:04:46.421 EAL: Detected lcore 25 as core 1 on socket 1 00:04:46.421 EAL: Detected lcore 26 as core 2 on socket 1 00:04:46.421 EAL: Detected lcore 27 as core 3 on socket 1 00:04:46.421 EAL: Detected lcore 28 as core 4 on socket 1 00:04:46.421 EAL: Detected lcore 29 as core 5 on socket 1 00:04:46.421 EAL: Detected lcore 30 as core 6 on socket 1 00:04:46.421 EAL: Detected lcore 31 as core 8 on socket 1 00:04:46.421 EAL: Detected lcore 32 as core 9 on socket 1 00:04:46.421 EAL: Detected lcore 33 as core 10 on socket 1 00:04:46.421 EAL: Detected lcore 34 as core 11 on socket 1 00:04:46.421 EAL: Detected lcore 35 as core 12 on socket 1 00:04:46.421 EAL: Detected lcore 36 as core 13 on socket 1 00:04:46.421 EAL: Detected lcore 37 as core 16 on socket 1 00:04:46.421 EAL: Detected lcore 38 as core 17 on socket 1 00:04:46.421 EAL: Detected lcore 39 as core 18 on socket 1 00:04:46.421 EAL: Detected lcore 40 as core 19 on socket 1 00:04:46.421 EAL: Detected lcore 41 as core 20 on socket 1 00:04:46.421 EAL: Detected lcore 42 as core 21 on socket 1 00:04:46.421 EAL: Detected lcore 43 as core 25 on socket 1 00:04:46.421 EAL: Detected lcore 44 as core 26 on socket 1 00:04:46.421 EAL: Detected lcore 45 as core 27 on socket 1 00:04:46.421 EAL: Detected lcore 46 as core 28 on socket 1 00:04:46.421 EAL: 
Detected lcore 47 as core 29 on socket 1 00:04:46.421 EAL: Detected lcore 48 as core 0 on socket 0 00:04:46.421 EAL: Detected lcore 49 as core 1 on socket 0 00:04:46.421 EAL: Detected lcore 50 as core 2 on socket 0 00:04:46.421 EAL: Detected lcore 51 as core 3 on socket 0 00:04:46.421 EAL: Detected lcore 52 as core 4 on socket 0 00:04:46.421 EAL: Detected lcore 53 as core 5 on socket 0 00:04:46.421 EAL: Detected lcore 54 as core 6 on socket 0 00:04:46.421 EAL: Detected lcore 55 as core 9 on socket 0 00:04:46.421 EAL: Detected lcore 56 as core 10 on socket 0 00:04:46.421 EAL: Detected lcore 57 as core 11 on socket 0 00:04:46.421 EAL: Detected lcore 58 as core 12 on socket 0 00:04:46.421 EAL: Detected lcore 59 as core 13 on socket 0 00:04:46.421 EAL: Detected lcore 60 as core 16 on socket 0 00:04:46.421 EAL: Detected lcore 61 as core 17 on socket 0 00:04:46.421 EAL: Detected lcore 62 as core 18 on socket 0 00:04:46.421 EAL: Detected lcore 63 as core 19 on socket 0 00:04:46.421 EAL: Detected lcore 64 as core 20 on socket 0 00:04:46.421 EAL: Detected lcore 65 as core 21 on socket 0 00:04:46.421 EAL: Detected lcore 66 as core 24 on socket 0 00:04:46.421 EAL: Detected lcore 67 as core 25 on socket 0 00:04:46.421 EAL: Detected lcore 68 as core 26 on socket 0 00:04:46.421 EAL: Detected lcore 69 as core 27 on socket 0 00:04:46.421 EAL: Detected lcore 70 as core 28 on socket 0 00:04:46.421 EAL: Detected lcore 71 as core 29 on socket 0 00:04:46.421 EAL: Detected lcore 72 as core 0 on socket 1 00:04:46.421 EAL: Detected lcore 73 as core 1 on socket 1 00:04:46.421 EAL: Detected lcore 74 as core 2 on socket 1 00:04:46.421 EAL: Detected lcore 75 as core 3 on socket 1 00:04:46.421 EAL: Detected lcore 76 as core 4 on socket 1 00:04:46.421 EAL: Detected lcore 77 as core 5 on socket 1 00:04:46.421 EAL: Detected lcore 78 as core 6 on socket 1 00:04:46.421 EAL: Detected lcore 79 as core 8 on socket 1 00:04:46.421 EAL: Detected lcore 80 as core 9 on socket 1 00:04:46.421 EAL: Detected lcore 81 as core 10 on socket 1 00:04:46.421 EAL: Detected lcore 82 as core 11 on socket 1 00:04:46.421 EAL: Detected lcore 83 as core 12 on socket 1 00:04:46.421 EAL: Detected lcore 84 as core 13 on socket 1 00:04:46.421 EAL: Detected lcore 85 as core 16 on socket 1 00:04:46.421 EAL: Detected lcore 86 as core 17 on socket 1 00:04:46.421 EAL: Detected lcore 87 as core 18 on socket 1 00:04:46.421 EAL: Detected lcore 88 as core 19 on socket 1 00:04:46.421 EAL: Detected lcore 89 as core 20 on socket 1 00:04:46.421 EAL: Detected lcore 90 as core 21 on socket 1 00:04:46.421 EAL: Detected lcore 91 as core 25 on socket 1 00:04:46.421 EAL: Detected lcore 92 as core 26 on socket 1 00:04:46.421 EAL: Detected lcore 93 as core 27 on socket 1 00:04:46.421 EAL: Detected lcore 94 as core 28 on socket 1 00:04:46.421 EAL: Detected lcore 95 as core 29 on socket 1 00:04:46.421 EAL: Maximum logical cores by configuration: 128 00:04:46.421 EAL: Detected CPU lcores: 96 00:04:46.421 EAL: Detected NUMA nodes: 2 00:04:46.421 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.421 EAL: Detected shared linkage of DPDK 00:04:46.421 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.421 EAL: Bus pci wants IOVA as 'DC' 00:04:46.421 EAL: Buses did not request a specific IOVA mode. 00:04:46.421 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:46.421 EAL: Selected IOVA mode 'VA' 00:04:46.422 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.422 EAL: Probing VFIO support... 
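[editor's note] EAL's lcore-to-core/socket table above mirrors the kernel's CPU topology files. A quick way to cross-check a single entry, for example "lcore 47 as core 29 on socket 1", is to read the same attributes from sysfs; the paths below are standard Linux topology files rather than anything SPDK-specific, so treat this as an illustrative spot check.

    # Cross-check one EAL line, e.g. "Detected lcore 47 as core 29 on socket 1"
    cpu=47
    echo "lcore $cpu -> core $(cat /sys/devices/system/cpu/cpu$cpu/topology/core_id)" \
         "on socket $(cat /sys/devices/system/cpu/cpu$cpu/topology/physical_package_id)"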
00:04:46.422 EAL: IOMMU type 1 (Type 1) is supported 00:04:46.422 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:46.422 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:46.422 EAL: VFIO support initialized 00:04:46.422 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.422 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.422 EAL: Setting up physically contiguous memory... 00:04:46.422 EAL: Setting maximum number of open files to 524288 00:04:46.422 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.422 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:46.422 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.422 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:46.422 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.422 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:46.422 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.422 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.422 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:46.422 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:46.422 EAL: Hugepages will be freed exactly as allocated. 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: TSC frequency is ~2100000 KHz 00:04:46.422 EAL: Main lcore 0 is ready (tid=7f5bb466da00;cpuset=[0]) 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 0 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.422 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.422 00:04:46.422 00:04:46.422 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.422 http://cunit.sourceforge.net/ 00:04:46.422 00:04:46.422 00:04:46.422 Suite: components_suite 00:04:46.422 Test: vtophys_malloc_test ...passed 00:04:46.422 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 10MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 10MB 00:04:46.422 EAL: Trying to obtain current memory policy. 
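[editor's note] The memseg lists above (4 lists of 8192 segments per socket, 2 MB hugepages) are backed by the hugepage pools that the earlier setup.sh status table reported as node0 2048/2048 and node1 0/0. To confirm those pools outside the test, the per-node counters can be read directly; the sysfs paths below are standard kernel hugetlb files, shown only as a sanity check, not part of the SPDK scripts.

    # Per-NUMA-node 2 MB hugepage pools backing the EAL memseg lists above
    for node in /sys/devices/system/node/node*; do
        total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
        echo "$(basename "$node"): $free free / $total total 2048kB hugepages"
    done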
00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 18MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 18MB 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 34MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 34MB 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 66MB 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was shrunk by 66MB 00:04:46.422 EAL: Trying to obtain current memory policy. 00:04:46.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.422 EAL: Restoring previous memory policy: 4 00:04:46.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.422 EAL: request: mp_malloc_sync 00:04:46.422 EAL: No shared files mode enabled, IPC is disabled 00:04:46.422 EAL: Heap on socket 0 was expanded by 130MB 00:04:46.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.726 EAL: request: mp_malloc_sync 00:04:46.726 EAL: No shared files mode enabled, IPC is disabled 00:04:46.726 EAL: Heap on socket 0 was shrunk by 130MB 00:04:46.726 EAL: Trying to obtain current memory policy. 00:04:46.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.726 EAL: Restoring previous memory policy: 4 00:04:46.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.726 EAL: request: mp_malloc_sync 00:04:46.726 EAL: No shared files mode enabled, IPC is disabled 00:04:46.726 EAL: Heap on socket 0 was expanded by 258MB 00:04:46.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.726 EAL: request: mp_malloc_sync 00:04:46.726 EAL: No shared files mode enabled, IPC is disabled 00:04:46.726 EAL: Heap on socket 0 was shrunk by 258MB 00:04:46.726 EAL: Trying to obtain current memory policy. 
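[editor's note] Each "expanded by"/"shrunk by" pair in this stretch corresponds to the vtophys test allocating and freeing a DPDK buffer, with the registered 'spdk' mem event callback firing on every change. The growth is served from the 2 MB hugepage pool, so one rough way to observe it from outside is to poll the kernel's counters while the binary runs; /proc/meminfo fields used below are standard, and the test path is the one printed in this log.

    # Watch hugepage consumption while the vtophys test drives heap expand/shrink
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys &
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        grep -E 'HugePages_(Total|Free)' /proc/meminfo
        sleep 0.2
    done
    wait "$pid"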
00:04:46.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.726 EAL: Restoring previous memory policy: 4 00:04:46.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.726 EAL: request: mp_malloc_sync 00:04:46.726 EAL: No shared files mode enabled, IPC is disabled 00:04:46.726 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.985 EAL: request: mp_malloc_sync 00:04:46.985 EAL: No shared files mode enabled, IPC is disabled 00:04:46.985 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.985 EAL: Trying to obtain current memory policy. 00:04:46.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.244 EAL: Restoring previous memory policy: 4 00:04:47.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.244 EAL: request: mp_malloc_sync 00:04:47.244 EAL: No shared files mode enabled, IPC is disabled 00:04:47.244 EAL: Heap on socket 0 was expanded by 1026MB 00:04:47.244 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.504 EAL: request: mp_malloc_sync 00:04:47.504 EAL: No shared files mode enabled, IPC is disabled 00:04:47.504 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.504 passed 00:04:47.504 00:04:47.504 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.504 suites 1 1 n/a 0 0 00:04:47.504 tests 2 2 2 0 0 00:04:47.504 asserts 497 497 497 0 n/a 00:04:47.504 00:04:47.504 Elapsed time = 0.961 seconds 00:04:47.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.504 EAL: request: mp_malloc_sync 00:04:47.504 EAL: No shared files mode enabled, IPC is disabled 00:04:47.504 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.504 EAL: No shared files mode enabled, IPC is disabled 00:04:47.504 EAL: No shared files mode enabled, IPC is disabled 00:04:47.504 EAL: No shared files mode enabled, IPC is disabled 00:04:47.504 00:04:47.504 real 0m1.079s 00:04:47.504 user 0m0.628s 00:04:47.504 sys 0m0.425s 00:04:47.504 22:55:39 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:47.504 22:55:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:47.504 ************************************ 00:04:47.504 END TEST env_vtophys 00:04:47.504 ************************************ 00:04:47.504 22:55:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.504 22:55:39 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:47.504 22:55:39 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:47.504 22:55:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.504 ************************************ 00:04:47.504 START TEST env_pci 00:04:47.504 ************************************ 00:04:47.504 22:55:39 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.504 00:04:47.504 00:04:47.504 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.504 http://cunit.sourceforge.net/ 00:04:47.504 00:04:47.504 00:04:47.504 Suite: pci 00:04:47.504 Test: pci_hook ...[2024-06-07 22:55:39.694077] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 736504 has claimed it 00:04:47.504 EAL: Cannot find device (10000:00:01.0) 00:04:47.504 EAL: Failed to attach device on primary process 00:04:47.504 passed 00:04:47.504 00:04:47.504 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.504 suites 1 1 
n/a 0 0 00:04:47.504 tests 1 1 1 0 0 00:04:47.504 asserts 25 25 25 0 n/a 00:04:47.504 00:04:47.504 Elapsed time = 0.031 seconds 00:04:47.504 00:04:47.504 real 0m0.051s 00:04:47.504 user 0m0.014s 00:04:47.504 sys 0m0.037s 00:04:47.504 22:55:39 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:47.504 22:55:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.504 ************************************ 00:04:47.504 END TEST env_pci 00:04:47.504 ************************************ 00:04:47.504 22:55:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.504 22:55:39 env -- env/env.sh@15 -- # uname 00:04:47.504 22:55:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.504 22:55:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.504 22:55:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.504 22:55:39 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:47.504 22:55:39 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:47.504 22:55:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.764 ************************************ 00:04:47.764 START TEST env_dpdk_post_init 00:04:47.764 ************************************ 00:04:47.764 22:55:39 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.764 EAL: Detected CPU lcores: 96 00:04:47.764 EAL: Detected NUMA nodes: 2 00:04:47.764 EAL: Detected shared linkage of DPDK 00:04:47.764 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.764 EAL: Selected IOVA mode 'VA' 00:04:47.764 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.764 EAL: VFIO support initialized 00:04:47.764 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.764 EAL: Using IOMMU type 1 (Type 1) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:47.764 EAL: Ignore mapping IO port bar(1) 00:04:47.764 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:48.704 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:48.704 EAL: Ignore mapping 
IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:48.704 EAL: Ignore mapping IO port bar(1) 00:04:48.704 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:52.896 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:52.896 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:52.896 Starting DPDK initialization... 00:04:52.896 Starting SPDK post initialization... 00:04:52.896 SPDK NVMe probe 00:04:52.896 Attaching to 0000:5f:00.0 00:04:52.896 Attached to 0000:5f:00.0 00:04:52.896 Cleaning up... 00:04:52.896 00:04:52.896 real 0m4.917s 00:04:52.896 user 0m3.816s 00:04:52.896 sys 0m0.168s 00:04:52.896 22:55:44 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.896 22:55:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 END TEST env_dpdk_post_init 00:04:52.896 ************************************ 00:04:52.896 22:55:44 env -- env/env.sh@26 -- # uname 00:04:52.896 22:55:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:52.896 22:55:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.896 22:55:44 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.896 22:55:44 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.896 22:55:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 START TEST env_mem_callbacks 00:04:52.896 ************************************ 00:04:52.896 22:55:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.896 EAL: Detected CPU lcores: 96 00:04:52.896 EAL: Detected NUMA nodes: 2 00:04:52.896 EAL: Detected shared linkage of DPDK 00:04:52.896 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.896 EAL: Selected IOVA mode 'VA' 00:04:52.896 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.896 EAL: VFIO support initialized 00:04:52.896 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.896 00:04:52.896 00:04:52.896 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.896 http://cunit.sourceforge.net/ 00:04:52.896 00:04:52.896 00:04:52.896 Suite: memory 00:04:52.896 Test: test ... 
00:04:52.896 register 0x200000200000 2097152 00:04:52.896 malloc 3145728 00:04:52.896 register 0x200000400000 4194304 00:04:52.896 buf 0x200000500000 len 3145728 PASSED 00:04:52.896 malloc 64 00:04:52.896 buf 0x2000004fff40 len 64 PASSED 00:04:52.896 malloc 4194304 00:04:52.896 register 0x200000800000 6291456 00:04:52.896 buf 0x200000a00000 len 4194304 PASSED 00:04:52.896 free 0x200000500000 3145728 00:04:52.896 free 0x2000004fff40 64 00:04:52.896 unregister 0x200000400000 4194304 PASSED 00:04:52.896 free 0x200000a00000 4194304 00:04:52.896 unregister 0x200000800000 6291456 PASSED 00:04:52.896 malloc 8388608 00:04:52.896 register 0x200000400000 10485760 00:04:52.896 buf 0x200000600000 len 8388608 PASSED 00:04:52.896 free 0x200000600000 8388608 00:04:52.896 unregister 0x200000400000 10485760 PASSED 00:04:52.896 passed 00:04:52.896 00:04:52.896 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.896 suites 1 1 n/a 0 0 00:04:52.896 tests 1 1 1 0 0 00:04:52.896 asserts 15 15 15 0 n/a 00:04:52.896 00:04:52.896 Elapsed time = 0.006 seconds 00:04:52.896 00:04:52.896 real 0m0.059s 00:04:52.896 user 0m0.019s 00:04:52.896 sys 0m0.040s 00:04:52.896 22:55:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.896 22:55:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 END TEST env_mem_callbacks 00:04:52.896 ************************************ 00:04:52.896 00:04:52.896 real 0m6.697s 00:04:52.896 user 0m4.785s 00:04:52.896 sys 0m0.982s 00:04:52.896 22:55:44 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.896 22:55:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 END TEST env 00:04:52.896 ************************************ 00:04:52.896 22:55:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.896 22:55:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.896 22:55:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.896 22:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 START TEST rpc 00:04:52.896 ************************************ 00:04:52.896 22:55:44 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.896 * Looking for test storage... 00:04:52.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:52.896 22:55:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=737544 00:04:52.896 22:55:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:52.896 22:55:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.896 22:55:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 737544 00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@830 -- # '[' -z 737544 ']' 00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
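[editor's note] The tail of this stretch shows rpc.sh launching spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waiting for the JSON-RPC socket at /var/tmp/spdk.sock. Once the "Waiting for process..." message clears, every rpc_cmd in the trace is scripts/rpc.py talking to that socket. A minimal manual session along the same lines might look like the sketch below; the socket path and binary locations are the ones printed in this log, and the polling loop is only a stand-in for the harness's waitforlisten helper.

    # Minimal manual version of what rpc.sh sets up: target plus one RPC round-trip
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!

    # poll until the UNIX-domain socket answers (stand-in for waitforlisten)
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs   # '[]' on a fresh target
    kill "$tgt_pid"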
00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:52.896 22:55:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 [2024-06-07 22:55:45.099217] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:04:52.896 [2024-06-07 22:55:45.099275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737544 ] 00:04:52.896 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.896 [2024-06-07 22:55:45.161314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.155 [2024-06-07 22:55:45.243784] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:53.155 [2024-06-07 22:55:45.243827] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 737544' to capture a snapshot of events at runtime. 00:04:53.155 [2024-06-07 22:55:45.243834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.155 [2024-06-07 22:55:45.243840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.155 [2024-06-07 22:55:45.243845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid737544 for offline analysis/debug. 00:04:53.155 [2024-06-07 22:55:45.243862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.721 22:55:45 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:53.721 22:55:45 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:53.721 22:55:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:53.721 22:55:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:53.721 22:55:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.722 22:55:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.722 22:55:45 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.722 22:55:45 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.722 22:55:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.722 ************************************ 00:04:53.722 START TEST rpc_integrity 00:04:53.722 ************************************ 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.722 
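[editor's note] rpc_integrity, whose trace continues below, exercises the bdev RPCs end to end: create a malloc bdev, layer a passthru bdev on top, confirm both appear in bdev_get_bdevs, then tear them down and confirm the list is empty again. The same sequence can be driven by hand with the commands the test itself uses (all of them appear verbatim in this trace); the jq length checks mirror rpc.sh's own assertions, and "8 512" is understood here as an 8 MB bdev with 512-byte blocks.

    # Hand-driven version of the rpc_integrity sequence recorded below
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    malloc=$($rpc bdev_malloc_create 8 512)          # prints the new bdev name, e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0

    $rpc bdev_get_bdevs | jq length                  # expect 2 (Malloc0 + Passthru0)

    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                  # expect 0 again

The passthru layer is what makes this an integrity check: deleting the claimed malloc bdev only succeeds once the passthru on top of it has been removed, which is the ordering the trace below follows.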
22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:53.722 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.722 22:55:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.980 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.980 { 00:04:53.980 "name": "Malloc0", 00:04:53.980 "aliases": [ 00:04:53.980 "34e52a87-2ac5-4832-aeb6-3282765da244" 00:04:53.980 ], 00:04:53.980 "product_name": "Malloc disk", 00:04:53.980 "block_size": 512, 00:04:53.980 "num_blocks": 16384, 00:04:53.980 "uuid": "34e52a87-2ac5-4832-aeb6-3282765da244", 00:04:53.980 "assigned_rate_limits": { 00:04:53.980 "rw_ios_per_sec": 0, 00:04:53.980 "rw_mbytes_per_sec": 0, 00:04:53.980 "r_mbytes_per_sec": 0, 00:04:53.980 "w_mbytes_per_sec": 0 00:04:53.980 }, 00:04:53.980 "claimed": false, 00:04:53.980 "zoned": false, 00:04:53.980 "supported_io_types": { 00:04:53.980 "read": true, 00:04:53.981 "write": true, 00:04:53.981 "unmap": true, 00:04:53.981 "write_zeroes": true, 00:04:53.981 "flush": true, 00:04:53.981 "reset": true, 00:04:53.981 "compare": false, 00:04:53.981 "compare_and_write": false, 00:04:53.981 "abort": true, 00:04:53.981 "nvme_admin": false, 00:04:53.981 "nvme_io": false 00:04:53.981 }, 00:04:53.981 "memory_domains": [ 00:04:53.981 { 00:04:53.981 "dma_device_id": "system", 00:04:53.981 "dma_device_type": 1 00:04:53.981 }, 00:04:53.981 { 00:04:53.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.981 "dma_device_type": 2 00:04:53.981 } 00:04:53.981 ], 00:04:53.981 "driver_specific": {} 00:04:53.981 } 00:04:53.981 ]' 00:04:53.981 22:55:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 [2024-06-07 22:55:46.049346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:53.981 [2024-06-07 22:55:46.049376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.981 [2024-06-07 22:55:46.049387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16c4d30 00:04:53.981 [2024-06-07 22:55:46.049393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.981 [2024-06-07 22:55:46.050370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.981 [2024-06-07 22:55:46.050390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.981 Passthru0 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.981 22:55:46 rpc.rpc_integrity 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.981 { 00:04:53.981 "name": "Malloc0", 00:04:53.981 "aliases": [ 00:04:53.981 "34e52a87-2ac5-4832-aeb6-3282765da244" 00:04:53.981 ], 00:04:53.981 "product_name": "Malloc disk", 00:04:53.981 "block_size": 512, 00:04:53.981 "num_blocks": 16384, 00:04:53.981 "uuid": "34e52a87-2ac5-4832-aeb6-3282765da244", 00:04:53.981 "assigned_rate_limits": { 00:04:53.981 "rw_ios_per_sec": 0, 00:04:53.981 "rw_mbytes_per_sec": 0, 00:04:53.981 "r_mbytes_per_sec": 0, 00:04:53.981 "w_mbytes_per_sec": 0 00:04:53.981 }, 00:04:53.981 "claimed": true, 00:04:53.981 "claim_type": "exclusive_write", 00:04:53.981 "zoned": false, 00:04:53.981 "supported_io_types": { 00:04:53.981 "read": true, 00:04:53.981 "write": true, 00:04:53.981 "unmap": true, 00:04:53.981 "write_zeroes": true, 00:04:53.981 "flush": true, 00:04:53.981 "reset": true, 00:04:53.981 "compare": false, 00:04:53.981 "compare_and_write": false, 00:04:53.981 "abort": true, 00:04:53.981 "nvme_admin": false, 00:04:53.981 "nvme_io": false 00:04:53.981 }, 00:04:53.981 "memory_domains": [ 00:04:53.981 { 00:04:53.981 "dma_device_id": "system", 00:04:53.981 "dma_device_type": 1 00:04:53.981 }, 00:04:53.981 { 00:04:53.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.981 "dma_device_type": 2 00:04:53.981 } 00:04:53.981 ], 00:04:53.981 "driver_specific": {} 00:04:53.981 }, 00:04:53.981 { 00:04:53.981 "name": "Passthru0", 00:04:53.981 "aliases": [ 00:04:53.981 "eea8d476-0e6e-5d6e-8668-2f2416f70472" 00:04:53.981 ], 00:04:53.981 "product_name": "passthru", 00:04:53.981 "block_size": 512, 00:04:53.981 "num_blocks": 16384, 00:04:53.981 "uuid": "eea8d476-0e6e-5d6e-8668-2f2416f70472", 00:04:53.981 "assigned_rate_limits": { 00:04:53.981 "rw_ios_per_sec": 0, 00:04:53.981 "rw_mbytes_per_sec": 0, 00:04:53.981 "r_mbytes_per_sec": 0, 00:04:53.981 "w_mbytes_per_sec": 0 00:04:53.981 }, 00:04:53.981 "claimed": false, 00:04:53.981 "zoned": false, 00:04:53.981 "supported_io_types": { 00:04:53.981 "read": true, 00:04:53.981 "write": true, 00:04:53.981 "unmap": true, 00:04:53.981 "write_zeroes": true, 00:04:53.981 "flush": true, 00:04:53.981 "reset": true, 00:04:53.981 "compare": false, 00:04:53.981 "compare_and_write": false, 00:04:53.981 "abort": true, 00:04:53.981 "nvme_admin": false, 00:04:53.981 "nvme_io": false 00:04:53.981 }, 00:04:53.981 "memory_domains": [ 00:04:53.981 { 00:04:53.981 "dma_device_id": "system", 00:04:53.981 "dma_device_type": 1 00:04:53.981 }, 00:04:53.981 { 00:04:53.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.981 "dma_device_type": 2 00:04:53.981 } 00:04:53.981 ], 00:04:53.981 "driver_specific": { 00:04:53.981 "passthru": { 00:04:53.981 "name": "Passthru0", 00:04:53.981 "base_bdev_name": "Malloc0" 00:04:53.981 } 00:04:53.981 } 00:04:53.981 } 00:04:53.981 ]' 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 22:55:46 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.981 22:55:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.981 00:04:53.981 real 0m0.278s 00:04:53.981 user 0m0.189s 00:04:53.981 sys 0m0.027s 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.981 22:55:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.981 ************************************ 00:04:53.981 END TEST rpc_integrity 00:04:53.981 ************************************ 00:04:53.981 22:55:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:53.981 22:55:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.981 22:55:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.981 22:55:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.240 ************************************ 00:04:54.240 START TEST rpc_plugins 00:04:54.240 ************************************ 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:54.240 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.240 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:54.240 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.240 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.240 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:54.240 { 00:04:54.240 "name": "Malloc1", 00:04:54.240 "aliases": [ 00:04:54.240 "5fabb011-95ca-4bf2-9b28-299b19081f43" 00:04:54.240 ], 00:04:54.240 "product_name": "Malloc disk", 00:04:54.240 "block_size": 4096, 00:04:54.240 "num_blocks": 256, 00:04:54.241 "uuid": "5fabb011-95ca-4bf2-9b28-299b19081f43", 00:04:54.241 "assigned_rate_limits": { 00:04:54.241 "rw_ios_per_sec": 0, 00:04:54.241 "rw_mbytes_per_sec": 0, 00:04:54.241 "r_mbytes_per_sec": 0, 00:04:54.241 "w_mbytes_per_sec": 0 00:04:54.241 }, 00:04:54.241 "claimed": false, 00:04:54.241 "zoned": false, 00:04:54.241 "supported_io_types": { 00:04:54.241 "read": true, 00:04:54.241 "write": true, 00:04:54.241 "unmap": true, 00:04:54.241 "write_zeroes": true, 00:04:54.241 "flush": true, 00:04:54.241 
"reset": true, 00:04:54.241 "compare": false, 00:04:54.241 "compare_and_write": false, 00:04:54.241 "abort": true, 00:04:54.241 "nvme_admin": false, 00:04:54.241 "nvme_io": false 00:04:54.241 }, 00:04:54.241 "memory_domains": [ 00:04:54.241 { 00:04:54.241 "dma_device_id": "system", 00:04:54.241 "dma_device_type": 1 00:04:54.241 }, 00:04:54.241 { 00:04:54.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.241 "dma_device_type": 2 00:04:54.241 } 00:04:54.241 ], 00:04:54.241 "driver_specific": {} 00:04:54.241 } 00:04:54.241 ]' 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:54.241 22:55:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.241 00:04:54.241 real 0m0.139s 00:04:54.241 user 0m0.089s 00:04:54.241 sys 0m0.014s 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.241 22:55:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.241 ************************************ 00:04:54.241 END TEST rpc_plugins 00:04:54.241 ************************************ 00:04:54.241 22:55:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.241 22:55:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.241 22:55:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.241 22:55:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.241 ************************************ 00:04:54.241 START TEST rpc_trace_cmd_test 00:04:54.241 ************************************ 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:54.241 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid737544", 00:04:54.241 "tpoint_group_mask": "0x8", 00:04:54.241 "iscsi_conn": { 00:04:54.241 "mask": "0x2", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "scsi": { 00:04:54.241 "mask": "0x4", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "bdev": { 00:04:54.241 "mask": "0x8", 00:04:54.241 "tpoint_mask": "0xffffffffffffffff" 00:04:54.241 }, 
00:04:54.241 "nvmf_rdma": { 00:04:54.241 "mask": "0x10", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "nvmf_tcp": { 00:04:54.241 "mask": "0x20", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "ftl": { 00:04:54.241 "mask": "0x40", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "blobfs": { 00:04:54.241 "mask": "0x80", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "dsa": { 00:04:54.241 "mask": "0x200", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "thread": { 00:04:54.241 "mask": "0x400", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "nvme_pcie": { 00:04:54.241 "mask": "0x800", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "iaa": { 00:04:54.241 "mask": "0x1000", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "nvme_tcp": { 00:04:54.241 "mask": "0x2000", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "bdev_nvme": { 00:04:54.241 "mask": "0x4000", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 }, 00:04:54.241 "sock": { 00:04:54.241 "mask": "0x8000", 00:04:54.241 "tpoint_mask": "0x0" 00:04:54.241 } 00:04:54.241 }' 00:04:54.241 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.500 00:04:54.500 real 0m0.164s 00:04:54.500 user 0m0.137s 00:04:54.500 sys 0m0.020s 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.500 22:55:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.500 ************************************ 00:04:54.500 END TEST rpc_trace_cmd_test 00:04:54.500 ************************************ 00:04:54.500 22:55:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.500 22:55:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.500 22:55:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.500 22:55:46 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.500 22:55:46 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.500 22:55:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.500 ************************************ 00:04:54.500 START TEST rpc_daemon_integrity 00:04:54.500 ************************************ 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.500 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.758 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.758 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.759 { 00:04:54.759 "name": "Malloc2", 00:04:54.759 "aliases": [ 00:04:54.759 "4b44271d-d324-4961-8684-1f189ccf7a47" 00:04:54.759 ], 00:04:54.759 "product_name": "Malloc disk", 00:04:54.759 "block_size": 512, 00:04:54.759 "num_blocks": 16384, 00:04:54.759 "uuid": "4b44271d-d324-4961-8684-1f189ccf7a47", 00:04:54.759 "assigned_rate_limits": { 00:04:54.759 "rw_ios_per_sec": 0, 00:04:54.759 "rw_mbytes_per_sec": 0, 00:04:54.759 "r_mbytes_per_sec": 0, 00:04:54.759 "w_mbytes_per_sec": 0 00:04:54.759 }, 00:04:54.759 "claimed": false, 00:04:54.759 "zoned": false, 00:04:54.759 "supported_io_types": { 00:04:54.759 "read": true, 00:04:54.759 "write": true, 00:04:54.759 "unmap": true, 00:04:54.759 "write_zeroes": true, 00:04:54.759 "flush": true, 00:04:54.759 "reset": true, 00:04:54.759 "compare": false, 00:04:54.759 "compare_and_write": false, 00:04:54.759 "abort": true, 00:04:54.759 "nvme_admin": false, 00:04:54.759 "nvme_io": false 00:04:54.759 }, 00:04:54.759 "memory_domains": [ 00:04:54.759 { 00:04:54.759 "dma_device_id": "system", 00:04:54.759 "dma_device_type": 1 00:04:54.759 }, 00:04:54.759 { 00:04:54.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.759 "dma_device_type": 2 00:04:54.759 } 00:04:54.759 ], 00:04:54.759 "driver_specific": {} 00:04:54.759 } 00:04:54.759 ]' 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.759 [2024-06-07 22:55:46.823451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.759 [2024-06-07 22:55:46.823478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.759 [2024-06-07 22:55:46.823490] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16c61d0 00:04:54.759 [2024-06-07 22:55:46.823496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.759 [2024-06-07 22:55:46.824453] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
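The malloc-plus-passthru stack exercised here (and in rpc_integrity above) can be reproduced by hand against a running target with scripts/rpc.py. The snippet below is only an illustrative sketch: it assumes the default RPC socket and names the bdevs explicitly to match the test rather than relying on auto-naming.

    # create an 8 MiB malloc bdev with 512-byte blocks, named to match the test
    ./scripts/rpc.py bdev_malloc_create -b Malloc2 8 512
    # layer a passthru vbdev on top; this claims the base bdev exclusively
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    # tear down in reverse order: the passthru first, then the base malloc
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2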
00:04:54.759 [2024-06-07 22:55:46.824475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.759 Passthru0 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.759 { 00:04:54.759 "name": "Malloc2", 00:04:54.759 "aliases": [ 00:04:54.759 "4b44271d-d324-4961-8684-1f189ccf7a47" 00:04:54.759 ], 00:04:54.759 "product_name": "Malloc disk", 00:04:54.759 "block_size": 512, 00:04:54.759 "num_blocks": 16384, 00:04:54.759 "uuid": "4b44271d-d324-4961-8684-1f189ccf7a47", 00:04:54.759 "assigned_rate_limits": { 00:04:54.759 "rw_ios_per_sec": 0, 00:04:54.759 "rw_mbytes_per_sec": 0, 00:04:54.759 "r_mbytes_per_sec": 0, 00:04:54.759 "w_mbytes_per_sec": 0 00:04:54.759 }, 00:04:54.759 "claimed": true, 00:04:54.759 "claim_type": "exclusive_write", 00:04:54.759 "zoned": false, 00:04:54.759 "supported_io_types": { 00:04:54.759 "read": true, 00:04:54.759 "write": true, 00:04:54.759 "unmap": true, 00:04:54.759 "write_zeroes": true, 00:04:54.759 "flush": true, 00:04:54.759 "reset": true, 00:04:54.759 "compare": false, 00:04:54.759 "compare_and_write": false, 00:04:54.759 "abort": true, 00:04:54.759 "nvme_admin": false, 00:04:54.759 "nvme_io": false 00:04:54.759 }, 00:04:54.759 "memory_domains": [ 00:04:54.759 { 00:04:54.759 "dma_device_id": "system", 00:04:54.759 "dma_device_type": 1 00:04:54.759 }, 00:04:54.759 { 00:04:54.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.759 "dma_device_type": 2 00:04:54.759 } 00:04:54.759 ], 00:04:54.759 "driver_specific": {} 00:04:54.759 }, 00:04:54.759 { 00:04:54.759 "name": "Passthru0", 00:04:54.759 "aliases": [ 00:04:54.759 "445ce512-1d91-5b27-bf54-7541a9951778" 00:04:54.759 ], 00:04:54.759 "product_name": "passthru", 00:04:54.759 "block_size": 512, 00:04:54.759 "num_blocks": 16384, 00:04:54.759 "uuid": "445ce512-1d91-5b27-bf54-7541a9951778", 00:04:54.759 "assigned_rate_limits": { 00:04:54.759 "rw_ios_per_sec": 0, 00:04:54.759 "rw_mbytes_per_sec": 0, 00:04:54.759 "r_mbytes_per_sec": 0, 00:04:54.759 "w_mbytes_per_sec": 0 00:04:54.759 }, 00:04:54.759 "claimed": false, 00:04:54.759 "zoned": false, 00:04:54.759 "supported_io_types": { 00:04:54.759 "read": true, 00:04:54.759 "write": true, 00:04:54.759 "unmap": true, 00:04:54.759 "write_zeroes": true, 00:04:54.759 "flush": true, 00:04:54.759 "reset": true, 00:04:54.759 "compare": false, 00:04:54.759 "compare_and_write": false, 00:04:54.759 "abort": true, 00:04:54.759 "nvme_admin": false, 00:04:54.759 "nvme_io": false 00:04:54.759 }, 00:04:54.759 "memory_domains": [ 00:04:54.759 { 00:04:54.759 "dma_device_id": "system", 00:04:54.759 "dma_device_type": 1 00:04:54.759 }, 00:04:54.759 { 00:04:54.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.759 "dma_device_type": 2 00:04:54.759 } 00:04:54.759 ], 00:04:54.759 "driver_specific": { 00:04:54.759 "passthru": { 00:04:54.759 "name": "Passthru0", 00:04:54.759 "base_bdev_name": "Malloc2" 00:04:54.759 } 00:04:54.759 } 00:04:54.759 } 00:04:54.759 ]' 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
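The jq checks that follow simply count and inspect the bdev_get_bdevs dump shown above; an equivalent manual inspection (default socket assumed, names matching the test) would be:

    # two bdevs are expected while the passthru is stacked: Malloc2 and Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # the base bdev should now report an exclusive-write claim by the passthru module
    ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq -r '.[0].claimed, .[0].claim_type'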
00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.759 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.767 00:04:54.767 real 0m0.247s 00:04:54.767 user 0m0.162s 00:04:54.767 sys 0m0.032s 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.767 22:55:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.767 ************************************ 00:04:54.767 END TEST rpc_daemon_integrity 00:04:54.767 ************************************ 00:04:54.767 22:55:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:54.767 22:55:46 rpc -- rpc/rpc.sh@84 -- # killprocess 737544 00:04:54.767 22:55:46 rpc -- common/autotest_common.sh@949 -- # '[' -z 737544 ']' 00:04:54.767 22:55:46 rpc -- common/autotest_common.sh@953 -- # kill -0 737544 00:04:54.767 22:55:46 rpc -- common/autotest_common.sh@954 -- # uname 00:04:54.767 22:55:46 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:54.767 22:55:46 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 737544 00:04:54.767 22:55:47 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:54.767 22:55:47 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:54.767 22:55:47 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 737544' 00:04:54.767 killing process with pid 737544 00:04:54.767 22:55:47 rpc -- common/autotest_common.sh@968 -- # kill 737544 00:04:54.767 22:55:47 rpc -- common/autotest_common.sh@973 -- # wait 737544 00:04:55.335 00:04:55.335 real 0m2.373s 00:04:55.335 user 0m3.038s 00:04:55.335 sys 0m0.655s 00:04:55.335 22:55:47 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:55.335 22:55:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.335 ************************************ 00:04:55.335 END TEST rpc 00:04:55.335 ************************************ 00:04:55.335 22:55:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:55.335 22:55:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.335 
22:55:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.335 22:55:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.335 ************************************ 00:04:55.335 START TEST skip_rpc 00:04:55.335 ************************************ 00:04:55.335 22:55:47 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:55.335 * Looking for test storage... 00:04:55.335 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:55.335 22:55:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:55.335 22:55:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:55.335 22:55:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:55.335 22:55:47 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.335 22:55:47 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.335 22:55:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.335 ************************************ 00:04:55.335 START TEST skip_rpc 00:04:55.335 ************************************ 00:04:55.335 22:55:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:55.335 22:55:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=738179 00:04:55.335 22:55:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.335 22:55:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:55.335 22:55:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:55.335 [2024-06-07 22:55:47.573700] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
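With --no-rpc-server the target comes up without listening on the default /var/tmp/spdk.sock, so the spdk_get_version probe a few lines further down is expected to fail; that is exactly what this test asserts. A minimal manual equivalent (core mask and paths are illustrative) would be:

    # start a target with no RPC listener, pinned to core 0
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    # any RPC must now fail, since nothing serves the default socket
    ./scripts/rpc.py spdk_get_version || echo 'RPC unavailable, as expected'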
00:04:55.335 [2024-06-07 22:55:47.573743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738179 ] 00:04:55.335 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.593 [2024-06-07 22:55:47.633207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.593 [2024-06-07 22:55:47.705799] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 738179 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 738179 ']' 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 738179 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 738179 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 738179' 00:05:00.861 killing process with pid 738179 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 738179 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 738179 00:05:00.861 00:05:00.861 real 0m5.363s 00:05:00.861 user 0m5.134s 00:05:00.861 sys 0m0.262s 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.861 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.861 ************************************ 00:05:00.861 END TEST skip_rpc 
00:05:00.861 ************************************ 00:05:00.861 22:55:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:00.861 22:55:52 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.861 22:55:52 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.861 22:55:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.861 ************************************ 00:05:00.861 START TEST skip_rpc_with_json 00:05:00.861 ************************************ 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=739121 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 739121 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 739121 ']' 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.861 22:55:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.861 [2024-06-07 22:55:52.988693] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
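skip_rpc_with_json performs a configure/save/reload round trip: create a TCP transport over RPC, dump the live configuration (the large JSON that follows), then boot a second target from that dump and confirm the transport is recreated at startup. A condensed sketch of the same flow, with illustrative file paths, is:

    # configure the running target, then capture its full state as JSON
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    # relaunch from the saved state and capture its log
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5   # give the relaunched target time to initialize (the test sleeps here too)
    # the transport should be recreated at boot from the saved configuration
    grep -q 'TCP Transport Init' log.txt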
00:05:00.861 [2024-06-07 22:55:52.988731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid739121 ] 00:05:00.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.861 [2024-06-07 22:55:53.047131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.861 [2024-06-07 22:55:53.125927] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.796 [2024-06-07 22:55:53.760426] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:01.796 request: 00:05:01.796 { 00:05:01.796 "trtype": "tcp", 00:05:01.796 "method": "nvmf_get_transports", 00:05:01.796 "req_id": 1 00:05:01.796 } 00:05:01.796 Got JSON-RPC error response 00:05:01.796 response: 00:05:01.796 { 00:05:01.796 "code": -19, 00:05:01.796 "message": "No such device" 00:05:01.796 } 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.796 [2024-06-07 22:55:53.768509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.796 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.797 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:01.797 { 00:05:01.797 "subsystems": [ 00:05:01.797 { 00:05:01.797 "subsystem": "keyring", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "iobuf", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "iobuf_set_options", 00:05:01.797 "params": { 00:05:01.797 "small_pool_count": 8192, 00:05:01.797 "large_pool_count": 1024, 00:05:01.797 "small_bufsize": 8192, 00:05:01.797 "large_bufsize": 135168 00:05:01.797 } 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "sock", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "sock_set_default_impl", 00:05:01.797 "params": { 00:05:01.797 "impl_name": "posix" 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "sock_impl_set_options", 00:05:01.797 "params": { 00:05:01.797 "impl_name": "ssl", 00:05:01.797 "recv_buf_size": 4096, 
00:05:01.797 "send_buf_size": 4096, 00:05:01.797 "enable_recv_pipe": true, 00:05:01.797 "enable_quickack": false, 00:05:01.797 "enable_placement_id": 0, 00:05:01.797 "enable_zerocopy_send_server": true, 00:05:01.797 "enable_zerocopy_send_client": false, 00:05:01.797 "zerocopy_threshold": 0, 00:05:01.797 "tls_version": 0, 00:05:01.797 "enable_ktls": false 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "sock_impl_set_options", 00:05:01.797 "params": { 00:05:01.797 "impl_name": "posix", 00:05:01.797 "recv_buf_size": 2097152, 00:05:01.797 "send_buf_size": 2097152, 00:05:01.797 "enable_recv_pipe": true, 00:05:01.797 "enable_quickack": false, 00:05:01.797 "enable_placement_id": 0, 00:05:01.797 "enable_zerocopy_send_server": true, 00:05:01.797 "enable_zerocopy_send_client": false, 00:05:01.797 "zerocopy_threshold": 0, 00:05:01.797 "tls_version": 0, 00:05:01.797 "enable_ktls": false 00:05:01.797 } 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "vmd", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "accel", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "accel_set_options", 00:05:01.797 "params": { 00:05:01.797 "small_cache_size": 128, 00:05:01.797 "large_cache_size": 16, 00:05:01.797 "task_count": 2048, 00:05:01.797 "sequence_count": 2048, 00:05:01.797 "buf_count": 2048 00:05:01.797 } 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "bdev", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "bdev_set_options", 00:05:01.797 "params": { 00:05:01.797 "bdev_io_pool_size": 65535, 00:05:01.797 "bdev_io_cache_size": 256, 00:05:01.797 "bdev_auto_examine": true, 00:05:01.797 "iobuf_small_cache_size": 128, 00:05:01.797 "iobuf_large_cache_size": 16 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "bdev_raid_set_options", 00:05:01.797 "params": { 00:05:01.797 "process_window_size_kb": 1024 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "bdev_iscsi_set_options", 00:05:01.797 "params": { 00:05:01.797 "timeout_sec": 30 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "bdev_nvme_set_options", 00:05:01.797 "params": { 00:05:01.797 "action_on_timeout": "none", 00:05:01.797 "timeout_us": 0, 00:05:01.797 "timeout_admin_us": 0, 00:05:01.797 "keep_alive_timeout_ms": 10000, 00:05:01.797 "arbitration_burst": 0, 00:05:01.797 "low_priority_weight": 0, 00:05:01.797 "medium_priority_weight": 0, 00:05:01.797 "high_priority_weight": 0, 00:05:01.797 "nvme_adminq_poll_period_us": 10000, 00:05:01.797 "nvme_ioq_poll_period_us": 0, 00:05:01.797 "io_queue_requests": 0, 00:05:01.797 "delay_cmd_submit": true, 00:05:01.797 "transport_retry_count": 4, 00:05:01.797 "bdev_retry_count": 3, 00:05:01.797 "transport_ack_timeout": 0, 00:05:01.797 "ctrlr_loss_timeout_sec": 0, 00:05:01.797 "reconnect_delay_sec": 0, 00:05:01.797 "fast_io_fail_timeout_sec": 0, 00:05:01.797 "disable_auto_failback": false, 00:05:01.797 "generate_uuids": false, 00:05:01.797 "transport_tos": 0, 00:05:01.797 "nvme_error_stat": false, 00:05:01.797 "rdma_srq_size": 0, 00:05:01.797 "io_path_stat": false, 00:05:01.797 "allow_accel_sequence": false, 00:05:01.797 "rdma_max_cq_size": 0, 00:05:01.797 "rdma_cm_event_timeout_ms": 0, 00:05:01.797 "dhchap_digests": [ 00:05:01.797 "sha256", 00:05:01.797 "sha384", 00:05:01.797 "sha512" 00:05:01.797 ], 00:05:01.797 "dhchap_dhgroups": [ 00:05:01.797 "null", 00:05:01.797 "ffdhe2048", 00:05:01.797 "ffdhe3072", 
00:05:01.797 "ffdhe4096", 00:05:01.797 "ffdhe6144", 00:05:01.797 "ffdhe8192" 00:05:01.797 ] 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "bdev_nvme_set_hotplug", 00:05:01.797 "params": { 00:05:01.797 "period_us": 100000, 00:05:01.797 "enable": false 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "bdev_wait_for_examine" 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "scsi", 00:05:01.797 "config": null 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "scheduler", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "framework_set_scheduler", 00:05:01.797 "params": { 00:05:01.797 "name": "static" 00:05:01.797 } 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "vhost_scsi", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "vhost_blk", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "ublk", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "nbd", 00:05:01.797 "config": [] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "nvmf", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "nvmf_set_config", 00:05:01.797 "params": { 00:05:01.797 "discovery_filter": "match_any", 00:05:01.797 "admin_cmd_passthru": { 00:05:01.797 "identify_ctrlr": false 00:05:01.797 } 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "nvmf_set_max_subsystems", 00:05:01.797 "params": { 00:05:01.797 "max_subsystems": 1024 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "nvmf_set_crdt", 00:05:01.797 "params": { 00:05:01.797 "crdt1": 0, 00:05:01.797 "crdt2": 0, 00:05:01.797 "crdt3": 0 00:05:01.797 } 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "method": "nvmf_create_transport", 00:05:01.797 "params": { 00:05:01.797 "trtype": "TCP", 00:05:01.797 "max_queue_depth": 128, 00:05:01.797 "max_io_qpairs_per_ctrlr": 127, 00:05:01.797 "in_capsule_data_size": 4096, 00:05:01.797 "max_io_size": 131072, 00:05:01.797 "io_unit_size": 131072, 00:05:01.797 "max_aq_depth": 128, 00:05:01.797 "num_shared_buffers": 511, 00:05:01.797 "buf_cache_size": 4294967295, 00:05:01.797 "dif_insert_or_strip": false, 00:05:01.797 "zcopy": false, 00:05:01.797 "c2h_success": true, 00:05:01.797 "sock_priority": 0, 00:05:01.797 "abort_timeout_sec": 1, 00:05:01.797 "ack_timeout": 0, 00:05:01.797 "data_wr_pool_size": 0 00:05:01.797 } 00:05:01.797 } 00:05:01.797 ] 00:05:01.797 }, 00:05:01.797 { 00:05:01.797 "subsystem": "iscsi", 00:05:01.797 "config": [ 00:05:01.797 { 00:05:01.797 "method": "iscsi_set_options", 00:05:01.797 "params": { 00:05:01.797 "node_base": "iqn.2016-06.io.spdk", 00:05:01.797 "max_sessions": 128, 00:05:01.797 "max_connections_per_session": 2, 00:05:01.797 "max_queue_depth": 64, 00:05:01.797 "default_time2wait": 2, 00:05:01.797 "default_time2retain": 20, 00:05:01.797 "first_burst_length": 8192, 00:05:01.797 "immediate_data": true, 00:05:01.797 "allow_duplicated_isid": false, 00:05:01.797 "error_recovery_level": 0, 00:05:01.797 "nop_timeout": 60, 00:05:01.797 "nop_in_interval": 30, 00:05:01.797 "disable_chap": false, 00:05:01.797 "require_chap": false, 00:05:01.797 "mutual_chap": false, 00:05:01.797 "chap_group": 0, 00:05:01.797 "max_large_datain_per_connection": 64, 00:05:01.797 "max_r2t_per_connection": 4, 00:05:01.797 "pdu_pool_size": 36864, 00:05:01.797 "immediate_data_pool_size": 16384, 00:05:01.798 "data_out_pool_size": 2048 00:05:01.798 } 
00:05:01.798 } 00:05:01.798 ] 00:05:01.798 } 00:05:01.798 ] 00:05:01.798 } 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 739121 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 739121 ']' 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 739121 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 739121 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 739121' 00:05:01.798 killing process with pid 739121 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 739121 00:05:01.798 22:55:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 739121 00:05:02.056 22:55:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=739357 00:05:02.056 22:55:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.056 22:55:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 739357 ']' 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 739357' 00:05:07.325 killing process with pid 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 739357 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:07.325 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:07.583 00:05:07.583 real 0m6.665s 00:05:07.583 user 0m6.489s 00:05:07.583 sys 0m0.547s 00:05:07.583 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:05:07.583 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.583 ************************************ 00:05:07.583 END TEST skip_rpc_with_json 00:05:07.583 ************************************ 00:05:07.583 22:55:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 ************************************ 00:05:07.584 START TEST skip_rpc_with_delay 00:05:07.584 ************************************ 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.584 [2024-06-07 22:55:59.712676] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
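The error above is the point of this test: --wait-for-rpc defers framework initialization until an RPC releases it, so it cannot be combined with --no-rpc-server. The supported flow, sketched here with illustrative paths, starts the target paused and finishes initialization over RPC:

    # hold framework init until told to proceed
    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    # only startup-time RPCs are accepted at this point; this one completes init
    ./scripts/rpc.py framework_start_init
    # ordinary RPCs work from here on
    ./scripts/rpc.py spdk_get_version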
00:05:07.584 [2024-06-07 22:55:59.712736] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.584 00:05:07.584 real 0m0.058s 00:05:07.584 user 0m0.037s 00:05:07.584 sys 0m0.021s 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:07.584 22:55:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 ************************************ 00:05:07.584 END TEST skip_rpc_with_delay 00:05:07.584 ************************************ 00:05:07.584 22:55:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:07.584 22:55:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:07.584 22:55:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:07.584 22:55:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 ************************************ 00:05:07.584 START TEST exit_on_failed_rpc_init 00:05:07.584 ************************************ 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=740337 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 740337 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 740337 ']' 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:07.584 22:55:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 [2024-06-07 22:55:59.834095] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:05:07.584 [2024-06-07 22:55:59.834135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740337 ] 00:05:07.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.843 [2024-06-07 22:55:59.891637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.843 [2024-06-07 22:55:59.970765] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:08.409 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.409 [2024-06-07 22:56:00.643989] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:08.410 [2024-06-07 22:56:00.644040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740390 ] 00:05:08.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.668 [2024-06-07 22:56:00.701425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.668 [2024-06-07 22:56:00.773200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.668 [2024-06-07 22:56:00.773267] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
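The failure being provoked here is a plain socket conflict: the second spdk_tgt instance (-m 0x2) tries to bind the same default /var/tmp/spdk.sock that pid 740337 already owns, so the RPC listener refuses and the app exits non-zero, which is what the test expects. Running two targets side by side would instead require distinct RPC sockets, roughly as follows (socket path illustrative):

    # give the second instance its own RPC socket
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    # and point rpc.py at that socket explicitly
    ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version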
00:05:08.668 [2024-06-07 22:56:00.773276] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.668 [2024-06-07 22:56:00.773282] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 740337 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 740337 ']' 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 740337 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 740337 00:05:08.668 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:08.669 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:08.669 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 740337' 00:05:08.669 killing process with pid 740337 00:05:08.669 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 740337 00:05:08.669 22:56:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 740337 00:05:08.927 00:05:08.927 real 0m1.401s 00:05:08.927 user 0m1.600s 00:05:08.927 sys 0m0.375s 00:05:08.927 22:56:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.927 22:56:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.927 ************************************ 00:05:08.927 END TEST exit_on_failed_rpc_init 00:05:08.927 ************************************ 00:05:09.184 22:56:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:09.184 00:05:09.184 real 0m13.819s 00:05:09.184 user 0m13.386s 00:05:09.184 sys 0m1.435s 00:05:09.184 22:56:01 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:09.184 22:56:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.184 ************************************ 00:05:09.184 END TEST skip_rpc 00:05:09.184 ************************************ 00:05:09.184 22:56:01 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:09.184 22:56:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:09.184 22:56:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:09.184 22:56:01 -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.184 ************************************ 00:05:09.184 START TEST rpc_client 00:05:09.184 ************************************ 00:05:09.184 22:56:01 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:09.184 * Looking for test storage... 00:05:09.184 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:09.184 22:56:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:09.184 OK 00:05:09.184 22:56:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:09.184 00:05:09.184 real 0m0.102s 00:05:09.184 user 0m0.052s 00:05:09.184 sys 0m0.058s 00:05:09.184 22:56:01 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:09.184 22:56:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:09.184 ************************************ 00:05:09.184 END TEST rpc_client 00:05:09.184 ************************************ 00:05:09.184 22:56:01 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.184 22:56:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:09.184 22:56:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:09.184 22:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:09.184 ************************************ 00:05:09.184 START TEST json_config 00:05:09.184 ************************************ 00:05:09.184 22:56:01 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.443 22:56:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.443 22:56:01 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:09.443 22:56:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.443 22:56:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.443 22:56:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.443 22:56:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.443 22:56:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.444 22:56:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.444 22:56:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:09.444 22:56:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@47 -- # : 0 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.444 22:56:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:09.444 INFO: JSON configuration test init 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.444 22:56:01 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:09.444 22:56:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.444 22:56:01 json_config -- json_config/common.sh@10 -- # shift 00:05:09.444 22:56:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.444 22:56:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.444 22:56:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.444 22:56:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.444 22:56:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.444 22:56:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=740682 00:05:09.444 22:56:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.444 Waiting for target to run... 
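
The trace above is json_config_test_start_app bringing up an SPDK target in --wait-for-rpc mode and then blocking until its RPC socket answers. A minimal stand-alone sketch of that bring-up, reusing the binary and socket paths from this run; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here only as a cheap "is the socket answering" probe:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    # one core (-m 0x1), 1024 MB of hugepage memory (-s 1024), init paused until RPC arrives
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r $SOCK --wait-for-rpc &
    # wait until the UNIX-domain RPC socket accepts requests
    until $SPDK/scripts/rpc.py -s $SOCK rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
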
00:05:09.444 22:56:01 json_config -- json_config/common.sh@25 -- # waitforlisten 740682 /var/tmp/spdk_tgt.sock 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@830 -- # '[' -z 740682 ']' 00:05:09.444 22:56:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:09.444 22:56:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.444 [2024-06-07 22:56:01.614038] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:09.444 [2024-06-07 22:56:01.614086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740682 ] 00:05:09.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.703 [2024-06-07 22:56:01.875506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.703 [2024-06-07 22:56:01.944846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:10.270 22:56:02 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.270 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:10.270 22:56:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:10.270 22:56:02 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:10.270 22:56:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:13.598 22:56:05 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:13.598 22:56:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@45 -- # 
local ret=0 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:13.599 22:56:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:13.599 22:56:05 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:13.599 22:56:05 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:13.599 22:56:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:20.158 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:20.158 22:56:11 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:20.159 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:20.159 Found net devices under 0000:da:00.0: mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:20.159 Found net devices under 0000:da:00.1: mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@58 -- # uname 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
00:05:20.159 22:56:11 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:20.159 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:20.159 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:05:20.159 altname enp218s0f0np0 00:05:20.159 altname ens818f0np0 00:05:20.159 inet 192.168.100.8/24 scope global mlx_0_0 00:05:20.159 valid_lft forever preferred_lft forever 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@75 -- # [[ 
-z 192.168.100.9 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:20.159 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:20.159 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:05:20.159 altname enp218s0f1np1 00:05:20.159 altname ens818f1np1 00:05:20.159 inet 192.168.100.9/24 scope global mlx_0_1 00:05:20.159 valid_lft forever preferred_lft forever 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@422 -- # return 0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:20.159 192.168.100.9' 
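
allocate_nic_ips walks the RDMA netdevs returned by get_rdma_if_list and pulls each interface's IPv4 address with the ip/awk/cut pipeline traced above; those addresses are what end up in RDMA_IP_LIST. The same lookup, mirroring the get_ip_address helper from nvmf/common.sh (values in the comments are the ones on this rig):

    get_ip_address() {
        local ifc=$1
        # print the IPv4 address of an interface without its prefix length
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8
    get_ip_address mlx_0_1   # 192.168.100.9
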
00:05:20.159 22:56:11 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:20.159 192.168.100.9' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:20.159 192.168.100.9' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:20.159 22:56:11 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:20.160 22:56:11 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:20.160 22:56:11 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:20.160 22:56:12 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:20.160 22:56:12 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.160 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.160 MallocForNvmf0 00:05:20.160 22:56:12 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.160 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.160 MallocForNvmf1 00:05:20.160 22:56:12 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:20.160 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:20.419 [2024-06-07 22:56:12.500673] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:20.419 [2024-06-07 22:56:12.532427] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x176ea80/0x18bb8c0) succeed. 00:05:20.419 [2024-06-07 22:56:12.548949] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1770c70/0x181b7c0) succeed. 
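
With networking up, create_nvmf_subsystem_config drives the target purely over JSON-RPC: the two malloc bdevs and the RDMA transport traced above. The equivalent calls against the same socket, with rpc used simply as shorthand for the rpc.py path from this run:

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0        # -c 0 is bumped to 256 by the target, per the warning above
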
00:05:20.419 22:56:12 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.419 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.678 22:56:12 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.678 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.678 22:56:12 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.678 22:56:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.937 22:56:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:20.937 22:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:21.196 [2024-06-07 22:56:13.269864] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:21.196 22:56:13 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:21.196 22:56:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:21.196 22:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.196 22:56:13 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:21.196 22:56:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:21.196 22:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.196 22:56:13 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:21.196 22:56:13 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.196 22:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.453 MallocBdevForConfigChangeCheck 00:05:21.453 22:56:13 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:21.453 22:56:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:21.453 22:56:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.453 22:56:13 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:21.453 22:56:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.710 22:56:13 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:21.711 INFO: shutting down applications... 
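
The listener notice above ("NVMe/RDMA Target Listening on 192.168.100.8 port 4420") is produced by the usual three-step subsystem wiring. Written out with the same rpc shorthand as in the previous sketch; the nvme connect line is purely illustrative of how a host would reach that listener and is not part of this test:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # host side, for illustration only (not run by this suite):
    # nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
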
00:05:21.711 22:56:13 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:21.711 22:56:13 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:21.711 22:56:13 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:21.711 22:56:13 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:24.241 Calling clear_iscsi_subsystem 00:05:24.241 Calling clear_nvmf_subsystem 00:05:24.241 Calling clear_nbd_subsystem 00:05:24.241 Calling clear_ublk_subsystem 00:05:24.241 Calling clear_vhost_blk_subsystem 00:05:24.241 Calling clear_vhost_scsi_subsystem 00:05:24.241 Calling clear_bdev_subsystem 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@345 -- # break 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:24.241 22:56:16 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:24.241 22:56:16 json_config -- json_config/common.sh@31 -- # local app=target 00:05:24.241 22:56:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.242 22:56:16 json_config -- json_config/common.sh@35 -- # [[ -n 740682 ]] 00:05:24.242 22:56:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 740682 00:05:24.242 22:56:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.242 22:56:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.242 22:56:16 json_config -- json_config/common.sh@41 -- # kill -0 740682 00:05:24.242 22:56:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.808 22:56:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.808 22:56:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.808 22:56:16 json_config -- json_config/common.sh@41 -- # kill -0 740682 00:05:24.808 22:56:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.808 22:56:16 json_config -- json_config/common.sh@43 -- # break 00:05:24.808 22:56:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.808 22:56:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.808 SPDK target shutdown done 00:05:24.808 22:56:16 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:24.808 INFO: relaunching applications... 
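
json_config_test_shutdown_app's stop sequence above boils down to a SIGINT plus a bounded liveness poll. A simplified equivalent using the target pid from this run (the real helper in json_config/common.sh additionally clears app_pid and fails the test if the 30 iterations are exhausted):

    pid=740682                               # the first json_config target in this run
    kill -SIGINT "$pid"                      # ask the app to shut down cleanly
    for i in $(seq 1 30); do                 # ~15 s budget, matching the i<30 / sleep 0.5 loop above
        kill -0 "$pid" 2>/dev/null || break  # kill -0 only checks that the process still exists
        sleep 0.5
    done
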
00:05:24.808 22:56:16 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.808 22:56:16 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.808 22:56:16 json_config -- json_config/common.sh@10 -- # shift 00:05:24.808 22:56:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.808 22:56:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.808 22:56:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.808 22:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.808 22:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.808 22:56:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=745710 00:05:24.808 22:56:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.808 Waiting for target to run... 00:05:24.808 22:56:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.808 22:56:16 json_config -- json_config/common.sh@25 -- # waitforlisten 745710 /var/tmp/spdk_tgt.sock 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@830 -- # '[' -z 745710 ']' 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:24.808 22:56:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.808 [2024-06-07 22:56:16.884981] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:24.808 [2024-06-07 22:56:16.885053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745710 ] 00:05:24.808 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.066 [2024-06-07 22:56:17.337705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.324 [2024-06-07 22:56:17.428998] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.610 [2024-06-07 22:56:20.459831] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a3c920/0x1a68ec0) succeed. 00:05:28.610 [2024-06-07 22:56:20.470932] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a3eb10/0x1ac8ea0) succeed. 
00:05:28.610 [2024-06-07 22:56:20.519896] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:28.869 22:56:21 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:28.869 22:56:21 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:28.869 22:56:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.869 00:05:28.869 22:56:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:28.869 22:56:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:28.869 INFO: Checking if target configuration is the same... 00:05:28.869 22:56:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:28.869 22:56:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.869 22:56:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.869 + '[' 2 -ne 2 ']' 00:05:28.869 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.869 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:28.869 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:28.869 +++ basename /dev/fd/62 00:05:28.869 ++ mktemp /tmp/62.XXX 00:05:28.869 + tmp_file_1=/tmp/62.rc3 00:05:28.869 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.869 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.870 + tmp_file_2=/tmp/spdk_tgt_config.json.scV 00:05:28.870 + ret=0 00:05:28.870 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.128 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.128 + diff -u /tmp/62.rc3 /tmp/spdk_tgt_config.json.scV 00:05:29.128 + echo 'INFO: JSON config files are the same' 00:05:29.128 INFO: JSON config files are the same 00:05:29.128 + rm /tmp/62.rc3 /tmp/spdk_tgt_config.json.scV 00:05:29.128 + exit 0 00:05:29.128 22:56:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:29.128 22:56:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.128 INFO: changing configuration and checking if this can be detected... 
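
The "JSON config files are the same" verdict above comes from json_diff.sh: both the live configuration (save_config) and the on-disk spdk_tgt_config.json snapshot are passed through config_filter.py -method sort and the results are diffed. Roughly, and with arbitrary temp paths chosen here for illustration:

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    cfg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc save_config | $cfg -method sort > /tmp/live.json
    $cfg -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
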
00:05:29.128 22:56:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.128 22:56:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.388 22:56:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.388 22:56:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:29.388 22:56:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.388 + '[' 2 -ne 2 ']' 00:05:29.388 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.388 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:29.388 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:29.388 +++ basename /dev/fd/62 00:05:29.388 ++ mktemp /tmp/62.XXX 00:05:29.388 + tmp_file_1=/tmp/62.RaV 00:05:29.388 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.388 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.388 + tmp_file_2=/tmp/spdk_tgt_config.json.DTP 00:05:29.388 + ret=0 00:05:29.388 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.647 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.647 + diff -u /tmp/62.RaV /tmp/spdk_tgt_config.json.DTP 00:05:29.647 + ret=1 00:05:29.647 + echo '=== Start of file: /tmp/62.RaV ===' 00:05:29.647 + cat /tmp/62.RaV 00:05:29.647 + echo '=== End of file: /tmp/62.RaV ===' 00:05:29.647 + echo '' 00:05:29.647 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DTP ===' 00:05:29.647 + cat /tmp/spdk_tgt_config.json.DTP 00:05:29.647 + echo '=== End of file: /tmp/spdk_tgt_config.json.DTP ===' 00:05:29.647 + echo '' 00:05:29.647 + rm /tmp/62.RaV /tmp/spdk_tgt_config.json.DTP 00:05:29.647 + exit 1 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:29.647 INFO: configuration change detected. 
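
To prove that a real change is noticed, the test then deletes the scratch bdev and expects the same comparison to fail, which is exactly the ret=1 path above. Continuing the previous sketch (rpc, cfg and the /tmp files as defined there):

    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | $cfg -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json >/dev/null || echo 'INFO: configuration change detected.'
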
00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:29.647 22:56:21 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:29.647 22:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@317 -- # [[ -n 745710 ]] 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:29.647 22:56:21 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.647 22:56:21 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:29.647 22:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:29.648 22:56:21 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.648 22:56:21 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:29.648 22:56:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.907 22:56:21 json_config -- json_config/json_config.sh@323 -- # killprocess 745710 00:05:29.907 22:56:21 json_config -- common/autotest_common.sh@949 -- # '[' -z 745710 ']' 00:05:29.907 22:56:21 json_config -- common/autotest_common.sh@953 -- # kill -0 745710 00:05:29.907 22:56:21 json_config -- common/autotest_common.sh@954 -- # uname 00:05:29.907 22:56:21 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:29.907 22:56:21 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 745710 00:05:29.907 22:56:22 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:29.907 22:56:22 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:29.907 22:56:22 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 745710' 00:05:29.907 killing process with pid 745710 00:05:29.907 22:56:22 json_config -- common/autotest_common.sh@968 -- # kill 745710 00:05:29.907 22:56:22 json_config -- common/autotest_common.sh@973 -- # wait 745710 00:05:32.439 22:56:24 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.439 22:56:24 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:32.439 22:56:24 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:32.439 22:56:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 22:56:24 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:32.439 22:56:24 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:32.439 INFO: Success 00:05:32.439 22:56:24 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@117 -- # sync 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:32.439 22:56:24 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:32.439 00:05:32.439 real 0m22.730s 00:05:32.439 user 0m25.182s 00:05:32.439 sys 0m6.593s 00:05:32.439 22:56:24 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.439 22:56:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 END TEST json_config 00:05:32.439 ************************************ 00:05:32.439 22:56:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.439 22:56:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.439 22:56:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.439 22:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 START TEST json_config_extra_key 00:05:32.439 ************************************ 00:05:32.439 22:56:24 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.439 22:56:24 
json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:32.439 22:56:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.439 22:56:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.439 22:56:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.439 22:56:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.439 22:56:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.439 22:56:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.439 22:56:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:32.439 22:56:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.439 22:56:24 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:32.439 22:56:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:32.439 INFO: launching applications... 00:05:32.439 22:56:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=747206 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.439 Waiting for target to run... 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 747206 /var/tmp/spdk_tgt.sock 00:05:32.439 22:56:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:32.439 22:56:24 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 747206 ']' 00:05:32.439 22:56:24 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.440 22:56:24 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.440 22:56:24 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
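
json_config_extra_key exercises the other startup path: instead of --wait-for-rpc followed by load_config, the target is launched directly with a prebuilt JSON (--json extra_key.json) and only has to come up and shut down cleanly. Reduced to the launch line used above:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json &
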
00:05:32.440 22:56:24 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.440 22:56:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.440 [2024-06-07 22:56:24.384285] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:32.440 [2024-06-07 22:56:24.384330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747206 ] 00:05:32.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.440 [2024-06-07 22:56:24.660492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.697 [2024-06-07 22:56:24.728394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.956 22:56:25 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:32.956 22:56:25 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.956 00:05:32.956 22:56:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.956 INFO: shutting down applications... 00:05:32.956 22:56:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 747206 ]] 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 747206 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 747206 00:05:32.956 22:56:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 747206 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.522 22:56:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.522 SPDK target shutdown done 00:05:33.522 22:56:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.522 Success 00:05:33.522 00:05:33.522 real 0m1.432s 00:05:33.522 user 0m1.212s 00:05:33.522 sys 0m0.351s 00:05:33.522 22:56:25 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:33.522 22:56:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.522 ************************************ 00:05:33.522 END TEST json_config_extra_key 00:05:33.522 ************************************ 00:05:33.522 22:56:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.522 22:56:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.522 22:56:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.522 22:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.522 ************************************ 00:05:33.522 START TEST alias_rpc 00:05:33.522 ************************************ 00:05:33.522 22:56:25 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.780 * Looking for test storage... 00:05:33.780 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:33.780 22:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.780 22:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=747486 00:05:33.780 22:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 747486 00:05:33.780 22:56:25 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 747486 ']' 00:05:33.780 22:56:25 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.780 22:56:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.781 22:56:25 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:33.781 22:56:25 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.781 22:56:25 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:33.781 22:56:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.781 [2024-06-07 22:56:25.871633] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:05:33.781 [2024-06-07 22:56:25.871678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747486 ] 00:05:33.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.781 [2024-06-07 22:56:25.932504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.781 [2024-06-07 22:56:26.011787] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:34.718 22:56:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:34.718 22:56:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 747486 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 747486 ']' 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 747486 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 747486 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 747486' 00:05:34.718 killing process with pid 747486 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@968 -- # kill 747486 00:05:34.718 22:56:26 alias_rpc -- common/autotest_common.sh@973 -- # wait 747486 00:05:34.975 00:05:34.975 real 0m1.469s 00:05:34.975 user 0m1.623s 00:05:34.975 sys 0m0.372s 00:05:34.975 22:56:27 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.975 22:56:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.975 ************************************ 00:05:34.975 END TEST alias_rpc 00:05:34.975 ************************************ 00:05:34.975 22:56:27 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:34.975 22:56:27 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.975 22:56:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:34.975 22:56:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:34.975 22:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.234 ************************************ 00:05:35.234 START TEST spdkcli_tcp 00:05:35.234 ************************************ 00:05:35.234 22:56:27 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:35.234 * Looking for test storage... 
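The alias_rpc run that finishes above follows the same boot/drive/teardown pattern, and the single RPC it exercises is load_config, which reads a JSON configuration from stdin on the default /var/tmp/spdk.sock socket. A hedged sketch of that call (the config file path is illustrative; the -i flag is carried over verbatim from the logged command):

    # replay a saved JSON configuration into the running target over the default RPC socket
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i < /tmp/saved_config.json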
00:05:35.234 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:35.234 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:35.234 22:56:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:35.234 22:56:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=747773 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 747773 00:05:35.235 22:56:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 747773 ']' 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:35.235 22:56:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.235 [2024-06-07 22:56:27.409290] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
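The spdkcli_tcp target starting here is driven differently from the earlier tests: the entries that follow show a socat process bridging TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket, and rpc.py being pointed at that TCP endpoint for a single rpc_get_methods call. A condensed sketch of the bridge, built only from the commands visible in the trace (the socat_pid variable is illustrative; the test's err_cleanup handler does the equivalent kill):

    # forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # same RPC client as before, but addressed over TCP (-r retries, -t per-call timeout)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

Without a fork option socat handles a single connection, which is enough for the one call made here.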
00:05:35.235 [2024-06-07 22:56:27.409337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747773 ] 00:05:35.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.235 [2024-06-07 22:56:27.470285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.492 [2024-06-07 22:56:27.543568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.492 [2024-06-07 22:56:27.543571] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.058 22:56:28 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:36.058 22:56:28 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:36.058 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=747965 00:05:36.058 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:36.058 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:36.316 [ 00:05:36.316 "bdev_malloc_delete", 00:05:36.316 "bdev_malloc_create", 00:05:36.316 "bdev_null_resize", 00:05:36.316 "bdev_null_delete", 00:05:36.316 "bdev_null_create", 00:05:36.316 "bdev_nvme_cuse_unregister", 00:05:36.316 "bdev_nvme_cuse_register", 00:05:36.316 "bdev_opal_new_user", 00:05:36.316 "bdev_opal_set_lock_state", 00:05:36.316 "bdev_opal_delete", 00:05:36.316 "bdev_opal_get_info", 00:05:36.316 "bdev_opal_create", 00:05:36.316 "bdev_nvme_opal_revert", 00:05:36.316 "bdev_nvme_opal_init", 00:05:36.316 "bdev_nvme_send_cmd", 00:05:36.316 "bdev_nvme_get_path_iostat", 00:05:36.316 "bdev_nvme_get_mdns_discovery_info", 00:05:36.316 "bdev_nvme_stop_mdns_discovery", 00:05:36.316 "bdev_nvme_start_mdns_discovery", 00:05:36.316 "bdev_nvme_set_multipath_policy", 00:05:36.316 "bdev_nvme_set_preferred_path", 00:05:36.316 "bdev_nvme_get_io_paths", 00:05:36.316 "bdev_nvme_remove_error_injection", 00:05:36.316 "bdev_nvme_add_error_injection", 00:05:36.316 "bdev_nvme_get_discovery_info", 00:05:36.316 "bdev_nvme_stop_discovery", 00:05:36.316 "bdev_nvme_start_discovery", 00:05:36.316 "bdev_nvme_get_controller_health_info", 00:05:36.316 "bdev_nvme_disable_controller", 00:05:36.316 "bdev_nvme_enable_controller", 00:05:36.316 "bdev_nvme_reset_controller", 00:05:36.316 "bdev_nvme_get_transport_statistics", 00:05:36.316 "bdev_nvme_apply_firmware", 00:05:36.316 "bdev_nvme_detach_controller", 00:05:36.316 "bdev_nvme_get_controllers", 00:05:36.316 "bdev_nvme_attach_controller", 00:05:36.316 "bdev_nvme_set_hotplug", 00:05:36.316 "bdev_nvme_set_options", 00:05:36.316 "bdev_passthru_delete", 00:05:36.316 "bdev_passthru_create", 00:05:36.316 "bdev_lvol_set_parent_bdev", 00:05:36.316 "bdev_lvol_set_parent", 00:05:36.316 "bdev_lvol_check_shallow_copy", 00:05:36.316 "bdev_lvol_start_shallow_copy", 00:05:36.316 "bdev_lvol_grow_lvstore", 00:05:36.316 "bdev_lvol_get_lvols", 00:05:36.316 "bdev_lvol_get_lvstores", 00:05:36.316 "bdev_lvol_delete", 00:05:36.316 "bdev_lvol_set_read_only", 00:05:36.316 "bdev_lvol_resize", 00:05:36.316 "bdev_lvol_decouple_parent", 00:05:36.316 "bdev_lvol_inflate", 00:05:36.316 "bdev_lvol_rename", 00:05:36.316 "bdev_lvol_clone_bdev", 00:05:36.316 "bdev_lvol_clone", 00:05:36.316 "bdev_lvol_snapshot", 00:05:36.316 "bdev_lvol_create", 00:05:36.316 "bdev_lvol_delete_lvstore", 00:05:36.316 "bdev_lvol_rename_lvstore", 
00:05:36.316 "bdev_lvol_create_lvstore", 00:05:36.316 "bdev_raid_set_options", 00:05:36.316 "bdev_raid_remove_base_bdev", 00:05:36.316 "bdev_raid_add_base_bdev", 00:05:36.316 "bdev_raid_delete", 00:05:36.316 "bdev_raid_create", 00:05:36.316 "bdev_raid_get_bdevs", 00:05:36.316 "bdev_error_inject_error", 00:05:36.316 "bdev_error_delete", 00:05:36.316 "bdev_error_create", 00:05:36.316 "bdev_split_delete", 00:05:36.316 "bdev_split_create", 00:05:36.316 "bdev_delay_delete", 00:05:36.316 "bdev_delay_create", 00:05:36.316 "bdev_delay_update_latency", 00:05:36.316 "bdev_zone_block_delete", 00:05:36.316 "bdev_zone_block_create", 00:05:36.316 "blobfs_create", 00:05:36.316 "blobfs_detect", 00:05:36.316 "blobfs_set_cache_size", 00:05:36.316 "bdev_aio_delete", 00:05:36.316 "bdev_aio_rescan", 00:05:36.316 "bdev_aio_create", 00:05:36.316 "bdev_ftl_set_property", 00:05:36.316 "bdev_ftl_get_properties", 00:05:36.316 "bdev_ftl_get_stats", 00:05:36.316 "bdev_ftl_unmap", 00:05:36.316 "bdev_ftl_unload", 00:05:36.316 "bdev_ftl_delete", 00:05:36.316 "bdev_ftl_load", 00:05:36.316 "bdev_ftl_create", 00:05:36.316 "bdev_virtio_attach_controller", 00:05:36.316 "bdev_virtio_scsi_get_devices", 00:05:36.316 "bdev_virtio_detach_controller", 00:05:36.316 "bdev_virtio_blk_set_hotplug", 00:05:36.316 "bdev_iscsi_delete", 00:05:36.316 "bdev_iscsi_create", 00:05:36.316 "bdev_iscsi_set_options", 00:05:36.316 "accel_error_inject_error", 00:05:36.316 "ioat_scan_accel_module", 00:05:36.316 "dsa_scan_accel_module", 00:05:36.316 "iaa_scan_accel_module", 00:05:36.316 "keyring_file_remove_key", 00:05:36.316 "keyring_file_add_key", 00:05:36.316 "keyring_linux_set_options", 00:05:36.316 "iscsi_get_histogram", 00:05:36.316 "iscsi_enable_histogram", 00:05:36.316 "iscsi_set_options", 00:05:36.316 "iscsi_get_auth_groups", 00:05:36.316 "iscsi_auth_group_remove_secret", 00:05:36.316 "iscsi_auth_group_add_secret", 00:05:36.316 "iscsi_delete_auth_group", 00:05:36.316 "iscsi_create_auth_group", 00:05:36.316 "iscsi_set_discovery_auth", 00:05:36.316 "iscsi_get_options", 00:05:36.316 "iscsi_target_node_request_logout", 00:05:36.316 "iscsi_target_node_set_redirect", 00:05:36.316 "iscsi_target_node_set_auth", 00:05:36.316 "iscsi_target_node_add_lun", 00:05:36.316 "iscsi_get_stats", 00:05:36.316 "iscsi_get_connections", 00:05:36.316 "iscsi_portal_group_set_auth", 00:05:36.316 "iscsi_start_portal_group", 00:05:36.316 "iscsi_delete_portal_group", 00:05:36.316 "iscsi_create_portal_group", 00:05:36.316 "iscsi_get_portal_groups", 00:05:36.316 "iscsi_delete_target_node", 00:05:36.316 "iscsi_target_node_remove_pg_ig_maps", 00:05:36.316 "iscsi_target_node_add_pg_ig_maps", 00:05:36.316 "iscsi_create_target_node", 00:05:36.316 "iscsi_get_target_nodes", 00:05:36.316 "iscsi_delete_initiator_group", 00:05:36.316 "iscsi_initiator_group_remove_initiators", 00:05:36.316 "iscsi_initiator_group_add_initiators", 00:05:36.316 "iscsi_create_initiator_group", 00:05:36.316 "iscsi_get_initiator_groups", 00:05:36.316 "nvmf_set_crdt", 00:05:36.316 "nvmf_set_config", 00:05:36.316 "nvmf_set_max_subsystems", 00:05:36.316 "nvmf_stop_mdns_prr", 00:05:36.316 "nvmf_publish_mdns_prr", 00:05:36.316 "nvmf_subsystem_get_listeners", 00:05:36.316 "nvmf_subsystem_get_qpairs", 00:05:36.316 "nvmf_subsystem_get_controllers", 00:05:36.316 "nvmf_get_stats", 00:05:36.316 "nvmf_get_transports", 00:05:36.316 "nvmf_create_transport", 00:05:36.316 "nvmf_get_targets", 00:05:36.316 "nvmf_delete_target", 00:05:36.316 "nvmf_create_target", 00:05:36.316 "nvmf_subsystem_allow_any_host", 00:05:36.316 
"nvmf_subsystem_remove_host", 00:05:36.316 "nvmf_subsystem_add_host", 00:05:36.316 "nvmf_ns_remove_host", 00:05:36.316 "nvmf_ns_add_host", 00:05:36.316 "nvmf_subsystem_remove_ns", 00:05:36.316 "nvmf_subsystem_add_ns", 00:05:36.316 "nvmf_subsystem_listener_set_ana_state", 00:05:36.316 "nvmf_discovery_get_referrals", 00:05:36.316 "nvmf_discovery_remove_referral", 00:05:36.316 "nvmf_discovery_add_referral", 00:05:36.316 "nvmf_subsystem_remove_listener", 00:05:36.316 "nvmf_subsystem_add_listener", 00:05:36.316 "nvmf_delete_subsystem", 00:05:36.316 "nvmf_create_subsystem", 00:05:36.316 "nvmf_get_subsystems", 00:05:36.316 "env_dpdk_get_mem_stats", 00:05:36.316 "nbd_get_disks", 00:05:36.316 "nbd_stop_disk", 00:05:36.316 "nbd_start_disk", 00:05:36.316 "ublk_recover_disk", 00:05:36.316 "ublk_get_disks", 00:05:36.316 "ublk_stop_disk", 00:05:36.316 "ublk_start_disk", 00:05:36.316 "ublk_destroy_target", 00:05:36.316 "ublk_create_target", 00:05:36.316 "virtio_blk_create_transport", 00:05:36.316 "virtio_blk_get_transports", 00:05:36.316 "vhost_controller_set_coalescing", 00:05:36.316 "vhost_get_controllers", 00:05:36.316 "vhost_delete_controller", 00:05:36.316 "vhost_create_blk_controller", 00:05:36.316 "vhost_scsi_controller_remove_target", 00:05:36.316 "vhost_scsi_controller_add_target", 00:05:36.316 "vhost_start_scsi_controller", 00:05:36.316 "vhost_create_scsi_controller", 00:05:36.316 "thread_set_cpumask", 00:05:36.316 "framework_get_scheduler", 00:05:36.316 "framework_set_scheduler", 00:05:36.316 "framework_get_reactors", 00:05:36.316 "thread_get_io_channels", 00:05:36.316 "thread_get_pollers", 00:05:36.316 "thread_get_stats", 00:05:36.316 "framework_monitor_context_switch", 00:05:36.316 "spdk_kill_instance", 00:05:36.316 "log_enable_timestamps", 00:05:36.316 "log_get_flags", 00:05:36.316 "log_clear_flag", 00:05:36.316 "log_set_flag", 00:05:36.316 "log_get_level", 00:05:36.316 "log_set_level", 00:05:36.316 "log_get_print_level", 00:05:36.316 "log_set_print_level", 00:05:36.316 "framework_enable_cpumask_locks", 00:05:36.316 "framework_disable_cpumask_locks", 00:05:36.316 "framework_wait_init", 00:05:36.316 "framework_start_init", 00:05:36.316 "scsi_get_devices", 00:05:36.316 "bdev_get_histogram", 00:05:36.316 "bdev_enable_histogram", 00:05:36.316 "bdev_set_qos_limit", 00:05:36.316 "bdev_set_qd_sampling_period", 00:05:36.316 "bdev_get_bdevs", 00:05:36.316 "bdev_reset_iostat", 00:05:36.316 "bdev_get_iostat", 00:05:36.316 "bdev_examine", 00:05:36.316 "bdev_wait_for_examine", 00:05:36.316 "bdev_set_options", 00:05:36.316 "notify_get_notifications", 00:05:36.316 "notify_get_types", 00:05:36.316 "accel_get_stats", 00:05:36.316 "accel_set_options", 00:05:36.316 "accel_set_driver", 00:05:36.316 "accel_crypto_key_destroy", 00:05:36.316 "accel_crypto_keys_get", 00:05:36.316 "accel_crypto_key_create", 00:05:36.316 "accel_assign_opc", 00:05:36.316 "accel_get_module_info", 00:05:36.316 "accel_get_opc_assignments", 00:05:36.316 "vmd_rescan", 00:05:36.316 "vmd_remove_device", 00:05:36.316 "vmd_enable", 00:05:36.316 "sock_get_default_impl", 00:05:36.316 "sock_set_default_impl", 00:05:36.316 "sock_impl_set_options", 00:05:36.316 "sock_impl_get_options", 00:05:36.316 "iobuf_get_stats", 00:05:36.316 "iobuf_set_options", 00:05:36.316 "framework_get_pci_devices", 00:05:36.316 "framework_get_config", 00:05:36.316 "framework_get_subsystems", 00:05:36.316 "trace_get_info", 00:05:36.316 "trace_get_tpoint_group_mask", 00:05:36.316 "trace_disable_tpoint_group", 00:05:36.316 "trace_enable_tpoint_group", 00:05:36.316 
"trace_clear_tpoint_mask", 00:05:36.316 "trace_set_tpoint_mask", 00:05:36.316 "keyring_get_keys", 00:05:36.316 "spdk_get_version", 00:05:36.316 "rpc_get_methods" 00:05:36.316 ] 00:05:36.316 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.316 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:36.316 22:56:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 747773 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 747773 ']' 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 747773 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 747773 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 747773' 00:05:36.316 killing process with pid 747773 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 747773 00:05:36.316 22:56:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 747773 00:05:36.576 00:05:36.576 real 0m1.497s 00:05:36.576 user 0m2.782s 00:05:36.576 sys 0m0.427s 00:05:36.576 22:56:28 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.577 22:56:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.577 ************************************ 00:05:36.577 END TEST spdkcli_tcp 00:05:36.577 ************************************ 00:05:36.577 22:56:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.577 22:56:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.577 22:56:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.577 22:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.577 ************************************ 00:05:36.577 START TEST dpdk_mem_utility 00:05:36.577 ************************************ 00:05:36.577 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.873 * Looking for test storage... 
00:05:36.873 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.873 22:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.873 22:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=748074 00:05:36.873 22:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.873 22:56:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 748074 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 748074 ']' 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.873 22:56:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.873 [2024-06-07 22:56:28.954761] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:36.873 [2024-06-07 22:56:28.954804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748074 ] 00:05:36.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.873 [2024-06-07 22:56:29.014977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.873 [2024-06-07 22:56:29.094328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.807 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.807 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:37.807 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.807 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.807 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:37.807 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.807 { 00:05:37.807 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.807 } 00:05:37.807 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:37.807 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.807 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.807 1 heaps totaling size 814.000000 MiB 00:05:37.807 size: 814.000000 MiB heap id: 0 00:05:37.807 end heaps---------- 00:05:37.807 8 mempools totaling size 598.116089 MiB 00:05:37.807 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.807 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.807 size: 84.521057 MiB name: bdev_io_748074 00:05:37.807 size: 51.011292 MiB name: evtpool_748074 00:05:37.807 size: 50.003479 MiB name: msgpool_748074 
00:05:37.807 size: 21.763794 MiB name: PDU_Pool 00:05:37.807 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.807 size: 0.026123 MiB name: Session_Pool 00:05:37.807 end mempools------- 00:05:37.807 6 memzones totaling size 4.142822 MiB 00:05:37.807 size: 1.000366 MiB name: RG_ring_0_748074 00:05:37.807 size: 1.000366 MiB name: RG_ring_1_748074 00:05:37.807 size: 1.000366 MiB name: RG_ring_4_748074 00:05:37.807 size: 1.000366 MiB name: RG_ring_5_748074 00:05:37.807 size: 0.125366 MiB name: RG_ring_2_748074 00:05:37.807 size: 0.015991 MiB name: RG_ring_3_748074 00:05:37.807 end memzones------- 00:05:37.807 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.807 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:37.807 list of free elements. size: 12.519348 MiB 00:05:37.807 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.807 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.807 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.807 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.807 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.807 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.807 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.807 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.807 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:37.807 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:37.807 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:37.807 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:37.807 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.807 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:37.807 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:37.807 list of standard malloc elements. 
size: 199.218079 MiB 00:05:37.807 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.807 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.807 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.807 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.807 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.807 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.807 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.807 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.807 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.807 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.807 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:37.807 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.807 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.808 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.808 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:37.808 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:37.808 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:37.808 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.808 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.808 list of memzone associated elements. 
size: 602.262573 MiB 00:05:37.808 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.808 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.808 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.808 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.808 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.808 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_748074_0 00:05:37.808 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.808 associated memzone info: size: 48.002930 MiB name: MP_evtpool_748074_0 00:05:37.808 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.808 associated memzone info: size: 48.002930 MiB name: MP_msgpool_748074_0 00:05:37.808 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.808 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.808 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.808 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.808 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.808 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_748074 00:05:37.808 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.808 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_748074 00:05:37.808 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.808 associated memzone info: size: 1.007996 MiB name: MP_evtpool_748074 00:05:37.808 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.808 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.808 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.808 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.808 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.808 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.808 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.808 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.808 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.808 associated memzone info: size: 1.000366 MiB name: RG_ring_0_748074 00:05:37.808 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.808 associated memzone info: size: 1.000366 MiB name: RG_ring_1_748074 00:05:37.808 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.808 associated memzone info: size: 1.000366 MiB name: RG_ring_4_748074 00:05:37.808 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.808 associated memzone info: size: 1.000366 MiB name: RG_ring_5_748074 00:05:37.808 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.808 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_748074 00:05:37.808 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.808 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.808 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.808 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.808 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.808 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.808 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.808 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_748074 00:05:37.808 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.808 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.808 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:37.808 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.808 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:37.808 associated memzone info: size: 0.015991 MiB name: RG_ring_3_748074 00:05:37.808 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:37.808 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.808 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:37.808 associated memzone info: size: 0.000183 MiB name: MP_msgpool_748074 00:05:37.808 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.808 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_748074 00:05:37.808 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:37.808 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.808 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.808 22:56:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 748074 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 748074 ']' 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 748074 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 748074 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 748074' 00:05:37.808 killing process with pid 748074 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 748074 00:05:37.808 22:56:29 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 748074 00:05:38.067 00:05:38.067 real 0m1.352s 00:05:38.067 user 0m1.420s 00:05:38.067 sys 0m0.365s 00:05:38.067 22:56:30 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.067 22:56:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.067 ************************************ 00:05:38.067 END TEST dpdk_mem_utility 00:05:38.067 ************************************ 00:05:38.067 22:56:30 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:38.067 22:56:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:38.067 22:56:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.067 22:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:38.067 ************************************ 00:05:38.067 START TEST event 00:05:38.067 ************************************ 00:05:38.067 22:56:30 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:38.067 * Looking for test storage... 
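The dpdk_mem_utility block that ends above is a two-step flow: the env_dpdk_get_mem_stats RPC asks the running target to write out its DPDK memory statistics (the trace shows the reply naming /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump, first in full and then with -m 0 for heap 0 only. A sketch of the same flow outside the framework (the trace uses the framework's rpc_cmd wrapper; calling rpc.py directly is assumed to be the equivalent here):

    # ask the target to dump its DPDK memory stats; the reply names /tmp/spdk_mem_dump.txt
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize the dump: full heap/mempool/memzone report, then heap 0 only
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0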
00:05:38.067 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:38.067 22:56:30 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:38.067 22:56:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:38.067 22:56:30 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.067 22:56:30 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:38.067 22:56:30 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.067 22:56:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.327 ************************************ 00:05:38.327 START TEST event_perf 00:05:38.327 ************************************ 00:05:38.327 22:56:30 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.327 Running I/O for 1 seconds...[2024-06-07 22:56:30.382664] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:38.327 [2024-06-07 22:56:30.382731] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748371 ] 00:05:38.327 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.327 [2024-06-07 22:56:30.444271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.327 [2024-06-07 22:56:30.519131] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.327 [2024-06-07 22:56:30.519229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.327 [2024-06-07 22:56:30.519317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.327 [2024-06-07 22:56:30.519318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.704 Running I/O for 1 seconds... 00:05:39.704 lcore 0: 210419 00:05:39.704 lcore 1: 210415 00:05:39.704 lcore 2: 210417 00:05:39.704 lcore 3: 210417 00:05:39.704 done. 00:05:39.704 00:05:39.704 real 0m1.225s 00:05:39.704 user 0m4.145s 00:05:39.704 sys 0m0.077s 00:05:39.704 22:56:31 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.704 22:56:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.704 ************************************ 00:05:39.704 END TEST event_perf 00:05:39.704 ************************************ 00:05:39.704 22:56:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.704 22:56:31 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:39.704 22:56:31 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.704 22:56:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.704 ************************************ 00:05:39.704 START TEST event_reactor 00:05:39.704 ************************************ 00:05:39.704 22:56:31 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:39.704 [2024-06-07 22:56:31.665806] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
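event_perf, which finishes above, is a standalone benchmark rather than an RPC-driven test: it schedules events across every core in the mask for the requested number of seconds and prints one per-lcore event count when the run ends. The invocation as traced (run_test and the surrounding bookkeeping belong to the framework):

    # four reactors (mask 0xF), one-second run; output is the "lcore N: <count>" lines above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1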
00:05:39.704 [2024-06-07 22:56:31.665871] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748621 ] 00:05:39.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.704 [2024-06-07 22:56:31.727202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.704 [2024-06-07 22:56:31.798746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.640 test_start 00:05:40.640 oneshot 00:05:40.640 tick 100 00:05:40.640 tick 100 00:05:40.640 tick 250 00:05:40.640 tick 100 00:05:40.640 tick 100 00:05:40.640 tick 250 00:05:40.640 tick 100 00:05:40.640 tick 500 00:05:40.640 tick 100 00:05:40.640 tick 100 00:05:40.640 tick 250 00:05:40.640 tick 100 00:05:40.640 tick 100 00:05:40.640 test_end 00:05:40.640 00:05:40.640 real 0m1.219s 00:05:40.640 user 0m1.135s 00:05:40.640 sys 0m0.079s 00:05:40.640 22:56:32 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.640 22:56:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 ************************************ 00:05:40.640 END TEST event_reactor 00:05:40.640 ************************************ 00:05:40.640 22:56:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.640 22:56:32 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:40.640 22:56:32 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.640 22:56:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.899 ************************************ 00:05:40.899 START TEST event_reactor_perf 00:05:40.899 ************************************ 00:05:40.899 22:56:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.899 [2024-06-07 22:56:32.945835] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
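The reactor test that completes above, and the reactor_perf test starting next, are the same kind of standalone binaries: reactor runs on a single core for one second and prints the oneshot/tick trace seen above, while reactor_perf reports an events-per-second figure. Their invocations as traced:

    # timer/poller trace on one core for one second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
    # event throughput on one core for one second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1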
00:05:40.899 [2024-06-07 22:56:32.945902] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748869 ] 00:05:40.900 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.900 [2024-06-07 22:56:33.009496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.900 [2024-06-07 22:56:33.080112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.275 test_start 00:05:42.275 test_end 00:05:42.275 Performance: 522328 events per second 00:05:42.275 00:05:42.275 real 0m1.222s 00:05:42.275 user 0m1.139s 00:05:42.275 sys 0m0.078s 00:05:42.275 22:56:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:42.275 22:56:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.275 ************************************ 00:05:42.275 END TEST event_reactor_perf 00:05:42.275 ************************************ 00:05:42.275 22:56:34 event -- event/event.sh@49 -- # uname -s 00:05:42.275 22:56:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.275 22:56:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.275 22:56:34 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:42.275 22:56:34 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.275 22:56:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.275 ************************************ 00:05:42.275 START TEST event_scheduler 00:05:42.275 ************************************ 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:42.275 * Looking for test storage... 00:05:42.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:42.275 22:56:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.275 22:56:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=749149 00:05:42.275 22:56:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.275 22:56:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.275 22:56:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 749149 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 749149 ']' 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:42.275 22:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.275 [2024-06-07 22:56:34.342920] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:42.275 [2024-06-07 22:56:34.342965] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749149 ] 00:05:42.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.275 [2024-06-07 22:56:34.397665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.275 [2024-06-07 22:56:34.471796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.275 [2024-06-07 22:56:34.471881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.275 [2024-06-07 22:56:34.471966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.275 [2024-06-07 22:56:34.471968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:43.211 22:56:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 POWER: Env isn't set yet! 00:05:43.211 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:43.211 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.211 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.211 POWER: Attempting to initialise PSTAT power management... 
00:05:43.211 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:43.211 POWER: Initialized successfully for lcore 0 power management 00:05:43.211 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:43.211 POWER: Initialized successfully for lcore 1 power management 00:05:43.211 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:43.211 POWER: Initialized successfully for lcore 2 power management 00:05:43.211 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:43.211 POWER: Initialized successfully for lcore 3 power management 00:05:43.211 [2024-06-07 22:56:35.198135] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:43.211 [2024-06-07 22:56:35.198147] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:43.211 [2024-06-07 22:56:35.198154] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 [2024-06-07 22:56:35.265633] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 ************************************ 00:05:43.211 START TEST scheduler_create_thread 00:05:43.211 ************************************ 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 2 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 3 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 4 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 5 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 6 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.211 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 7 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 8 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 9 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.212 10 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.212 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.780 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:43.780 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:43.780 22:56:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:43.780 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.780 22:56:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.715 22:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:44.715 22:56:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.715 22:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:44.715 22:56:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.651 22:56:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.651 22:56:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.651 22:56:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.651 22:56:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.651 22:56:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.587 22:56:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.587 00:05:46.587 real 0m3.229s 00:05:46.587 user 0m0.021s 00:05:46.587 sys 0m0.007s 00:05:46.587 22:56:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.587 22:56:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.587 ************************************ 00:05:46.587 END TEST scheduler_create_thread 00:05:46.587 ************************************ 00:05:46.587 22:56:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.587 22:56:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 749149 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 749149 ']' 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 749149 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
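Condensed for reference, the thread-management sequence that scheduler_create_thread traces above reduces to the bash sketch below. The rpc variable, the combined loop, and the PYTHONPATH remark are illustrative shorthand (the test's rpc_cmd wrapper handles the socket and plugin lookup itself), and the thread ids 11 and 12 are simply the values this particular run returned.

    # Assumes the scheduler test app from the trace is already running and that
    # scripts/rpc.py can locate the scheduler_plugin module (the test framework
    # arranges this; PYTHONPATH may be needed outside of it).
    rpc="./scripts/rpc.py --plugin scheduler_plugin"

    # Busy threads pinned to cores 0-3, plus idle threads on the same cores
    # (the trace creates all active_pinned threads first, then the idle ones).
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done

    # Unpinned threads: one at 30% load, one created idle and raised to 50%,
    # and one created only to be deleted again. The create RPC prints the new
    # thread id, which the trace shows being captured (11 and 12 in this run).
    $rpc scheduler_thread_create -n one_third_active -a 30
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$thread_id" 50
    doomed_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$doomed_id"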
00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 749149 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 749149' 00:05:46.587 killing process with pid 749149 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 749149 00:05:46.587 22:56:38 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 749149 00:05:46.846 [2024-06-07 22:56:38.910688] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:46.846 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:46.846 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:46.846 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:46.846 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:46.846 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:46.846 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:46.846 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:46.846 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:47.105 00:05:47.105 real 0m4.951s 00:05:47.105 user 0m10.192s 00:05:47.105 sys 0m0.360s 00:05:47.105 22:56:39 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.105 22:56:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.105 ************************************ 00:05:47.105 END TEST event_scheduler 00:05:47.105 ************************************ 00:05:47.105 22:56:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.105 22:56:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.105 22:56:39 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.105 22:56:39 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.105 22:56:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.106 ************************************ 00:05:47.106 START TEST app_repeat 00:05:47.106 ************************************ 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=750107 00:05:47.106 22:56:39 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 750107' 00:05:47.106 Process app_repeat pid: 750107 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.106 spdk_app_start Round 0 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750107 /var/tmp/spdk-nbd.sock 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 750107 ']' 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.106 22:56:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:47.106 22:56:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.106 [2024-06-07 22:56:39.263124] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:05:47.106 [2024-06-07 22:56:39.263185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750107 ] 00:05:47.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.106 [2024-06-07 22:56:39.324612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.365 [2024-06-07 22:56:39.406404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.365 [2024-06-07 22:56:39.406407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.932 22:56:40 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.932 22:56:40 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:47.932 22:56:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.191 Malloc0 00:05:48.191 22:56:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.191 Malloc1 00:05:48.191 22:56:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.192 22:56:40 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.192 22:56:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.451 /dev/nbd0 00:05:48.451 22:56:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.451 22:56:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.451 1+0 records in 00:05:48.451 1+0 records out 00:05:48.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017578 s, 23.3 MB/s 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:48.451 22:56:40 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:48.451 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.451 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.451 22:56:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.710 /dev/nbd1 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@868 
-- # local i 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.710 1+0 records in 00:05:48.710 1+0 records out 00:05:48.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183373 s, 22.3 MB/s 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:48.710 22:56:40 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.710 22:56:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.969 { 00:05:48.969 "nbd_device": "/dev/nbd0", 00:05:48.969 "bdev_name": "Malloc0" 00:05:48.969 }, 00:05:48.969 { 00:05:48.969 "nbd_device": "/dev/nbd1", 00:05:48.969 "bdev_name": "Malloc1" 00:05:48.969 } 00:05:48.969 ]' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.969 { 00:05:48.969 "nbd_device": "/dev/nbd0", 00:05:48.969 "bdev_name": "Malloc0" 00:05:48.969 }, 00:05:48.969 { 00:05:48.969 "nbd_device": "/dev/nbd1", 00:05:48.969 "bdev_name": "Malloc1" 00:05:48.969 } 00:05:48.969 ]' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.969 /dev/nbd1' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.969 /dev/nbd1' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.969 256+0 records in 00:05:48.969 256+0 records out 00:05:48.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103592 s, 101 MB/s 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.969 256+0 records in 00:05:48.969 256+0 records out 00:05:48.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013419 s, 78.1 MB/s 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.969 256+0 records in 00:05:48.969 256+0 records out 00:05:48.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139668 s, 75.1 MB/s 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.969 22:56:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.970 22:56:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.970 22:56:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.970 22:56:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.970 22:56:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.970 22:56:41 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.970 22:56:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.228 22:56:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.487 22:56:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.488 22:56:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.488 22:56:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.747 22:56:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.005 [2024-06-07 22:56:42.137721] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.005 [2024-06-07 22:56:42.204298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.005 [2024-06-07 22:56:42.204300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.005 [2024-06-07 22:56:42.244977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.005 [2024-06-07 22:56:42.245018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.294 22:56:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.294 22:56:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.294 spdk_app_start Round 1 00:05:53.294 22:56:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750107 /var/tmp/spdk-nbd.sock 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 750107 ']' 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:53.294 22:56:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.294 22:56:45 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:53.294 22:56:45 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:53.294 22:56:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.294 Malloc0 00:05:53.294 22:56:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.294 Malloc1 00:05:53.294 22:56:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # 
local i 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.294 22:56:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.553 /dev/nbd0 00:05:53.553 22:56:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.553 22:56:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.553 1+0 records in 00:05:53.553 1+0 records out 00:05:53.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169507 s, 24.2 MB/s 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:53.553 22:56:45 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:53.553 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.553 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.553 22:56:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.812 /dev/nbd1 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@884 -- # dd 
if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.812 1+0 records in 00:05:53.812 1+0 records out 00:05:53.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190004 s, 21.6 MB/s 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:53.812 22:56:45 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.812 22:56:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.812 22:56:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.812 { 00:05:53.812 "nbd_device": "/dev/nbd0", 00:05:53.812 "bdev_name": "Malloc0" 00:05:53.812 }, 00:05:53.812 { 00:05:53.812 "nbd_device": "/dev/nbd1", 00:05:53.812 "bdev_name": "Malloc1" 00:05:53.812 } 00:05:53.812 ]' 00:05:53.812 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.812 { 00:05:53.812 "nbd_device": "/dev/nbd0", 00:05:53.812 "bdev_name": "Malloc0" 00:05:53.812 }, 00:05:53.812 { 00:05:53.812 "nbd_device": "/dev/nbd1", 00:05:53.812 "bdev_name": "Malloc1" 00:05:53.812 } 00:05:53.812 ]' 00:05:53.812 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.071 /dev/nbd1' 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.071 /dev/nbd1' 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.071 256+0 records in 
00:05:54.071 256+0 records out 00:05:54.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106218 s, 98.7 MB/s 00:05:54.071 22:56:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.072 256+0 records in 00:05:54.072 256+0 records out 00:05:54.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126225 s, 83.1 MB/s 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.072 256+0 records in 00:05:54.072 256+0 records out 00:05:54.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137138 s, 76.5 MB/s 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.072 22:56:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.330 22:56:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.589 22:56:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.589 22:56:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.848 22:56:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.107 [2024-06-07 22:56:47.152926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.107 [2024-06-07 22:56:47.223141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.107 [2024-06-07 22:56:47.223144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.107 [2024-06-07 22:56:47.264763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.107 [2024-06-07 22:56:47.264798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
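Each app_repeat round above runs the same nbd_dd_data_verify pass; stripped of the framework wrappers it comes down to the sketch below. SPDK_DIR and the nbd_list array are placeholders for the workspace path and device names visible in the trace, and the sketch assumes both Malloc bdevs are already exported through nbd_start_disk.

    # Write a 1 MiB random pattern, copy it to every exported nbd device,
    # then read each device back and compare against the pattern.
    tmp_file="$SPDK_DIR/test/event/nbdrandtest"   # placeholder for the path in the trace
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"           # any mismatch fails the round
    done

    rm "$tmp_file"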
00:05:57.715 22:56:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.715 22:56:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.715 spdk_app_start Round 2 00:05:57.715 22:56:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 750107 /var/tmp/spdk-nbd.sock 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 750107 ']' 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:57.715 22:56:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.974 22:56:50 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:57.974 22:56:50 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:57.974 22:56:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.233 Malloc0 00:05:58.233 22:56:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.233 Malloc1 00:05:58.233 22:56:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.491 22:56:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.491 22:56:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.492 /dev/nbd0 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.492 1+0 records in 00:05:58.492 1+0 records out 00:05:58.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181712 s, 22.5 MB/s 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:58.492 22:56:50 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.492 22:56:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.751 /dev/nbd1 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.751 1+0 records in 00:05:58.751 1+0 records out 00:05:58.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242916 s, 16.9 MB/s 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:58.751 22:56:50 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.751 22:56:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.010 { 00:05:59.010 "nbd_device": "/dev/nbd0", 00:05:59.010 "bdev_name": "Malloc0" 00:05:59.010 }, 00:05:59.010 { 00:05:59.010 "nbd_device": "/dev/nbd1", 00:05:59.010 "bdev_name": "Malloc1" 00:05:59.010 } 00:05:59.010 ]' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.010 { 00:05:59.010 "nbd_device": "/dev/nbd0", 00:05:59.010 "bdev_name": "Malloc0" 00:05:59.010 }, 00:05:59.010 { 00:05:59.010 "nbd_device": "/dev/nbd1", 00:05:59.010 "bdev_name": "Malloc1" 00:05:59.010 } 00:05:59.010 ]' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.010 /dev/nbd1' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.010 /dev/nbd1' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.010 256+0 records in 00:05:59.010 256+0 records out 00:05:59.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103504 s, 101 MB/s 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.010 256+0 records in 00:05:59.010 256+0 records out 00:05:59.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140221 s, 74.8 MB/s 00:05:59.010 22:56:51 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.010 256+0 records in 00:05:59.010 256+0 records out 00:05:59.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149218 s, 70.3 MB/s 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.010 22:56:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.269 22:56:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.528 22:56:51 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.528 22:56:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.528 22:56:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.787 22:56:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.047 [2024-06-07 22:56:52.159457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.047 [2024-06-07 22:56:52.225817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.047 [2024-06-07 22:56:52.225820] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.047 [2024-06-07 22:56:52.265853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.047 [2024-06-07 22:56:52.265891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.334 22:56:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 750107 /var/tmp/spdk-nbd.sock 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 750107 ']' 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:03.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.334 22:56:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:03.334 22:56:55 event.app_repeat -- event/event.sh@39 -- # killprocess 750107 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 750107 ']' 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 750107 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 750107 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 750107' 00:06:03.334 killing process with pid 750107 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@968 -- # kill 750107 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@973 -- # wait 750107 00:06:03.334 spdk_app_start is called in Round 0. 00:06:03.334 Shutdown signal received, stop current app iteration 00:06:03.334 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:03.334 spdk_app_start is called in Round 1. 00:06:03.334 Shutdown signal received, stop current app iteration 00:06:03.334 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:03.334 spdk_app_start is called in Round 2. 00:06:03.334 Shutdown signal received, stop current app iteration 00:06:03.334 Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 reinitialization... 00:06:03.334 spdk_app_start is called in Round 3. 
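The round-to-round restarts summarised above are triggered over RPC rather than with a raw kill: event.sh asks the running app to deliver SIGTERM to itself and then pauses before the next round. Stripped of the rpc_cmd wrapper, that is simply:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # ask the app to shut down gracefully
    sleep 3                                                               # let the reactors drain before the next round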
00:06:03.334 Shutdown signal received, stop current app iteration 00:06:03.334 22:56:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:03.334 22:56:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:03.334 00:06:03.334 real 0m16.119s 00:06:03.334 user 0m34.846s 00:06:03.334 sys 0m2.315s 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.334 22:56:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.334 ************************************ 00:06:03.334 END TEST app_repeat 00:06:03.334 ************************************ 00:06:03.334 22:56:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:03.334 22:56:55 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.334 22:56:55 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.334 22:56:55 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.334 22:56:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.334 ************************************ 00:06:03.335 START TEST cpu_locks 00:06:03.335 ************************************ 00:06:03.335 22:56:55 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.335 * Looking for test storage... 00:06:03.335 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:03.335 22:56:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:03.335 22:56:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:03.335 22:56:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:03.335 22:56:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:03.335 22:56:55 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.335 22:56:55 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.335 22:56:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.335 ************************************ 00:06:03.335 START TEST default_locks 00:06:03.335 ************************************ 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=752992 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 752992 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 752992 ']' 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
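waitforlisten, used above to gate each test on the freshly started spdk_tgt, waits until the new process answers on its UNIX-domain RPC socket. A rough stand-in (not the harness's implementation) could poll a harmless RPC such as rpc_get_methods:

    sock=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        scripts/rpc.py -s $sock rpc_get_methods &>/dev/null && break   # target is up and serving RPCs
        sleep 0.1
    done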
00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.335 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.335 [2024-06-07 22:56:55.557250] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:03.335 [2024-06-07 22:56:55.557291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752992 ] 00:06:03.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.595 [2024-06-07 22:56:55.617371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.595 [2024-06-07 22:56:55.696623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.163 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:04.163 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:04.163 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 752992 00:06:04.163 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 752992 00:06:04.163 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.422 lslocks: write error 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 752992 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 752992 ']' 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 752992 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 752992 00:06:04.422 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:04.423 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:04.423 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 752992' 00:06:04.423 killing process with pid 752992 00:06:04.423 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 752992 00:06:04.423 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 752992 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 752992 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 752992 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 752992 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 752992 ']' 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.991 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (752992) - No such process 00:06:04.991 ERROR: process (pid: 752992) is no longer running 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.991 22:56:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.991 22:56:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.991 22:56:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.991 00:06:04.991 real 0m1.481s 00:06:04.991 user 0m1.550s 00:06:04.991 sys 0m0.467s 00:06:04.991 22:56:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.991 22:56:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.991 ************************************ 00:06:04.991 END TEST default_locks 00:06:04.991 ************************************ 00:06:04.991 22:56:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.991 22:56:57 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.991 22:56:57 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.992 22:56:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.992 ************************************ 00:06:04.992 START TEST default_locks_via_rpc 00:06:04.992 ************************************ 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=753363 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 753363 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.992 22:56:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 753363 ']' 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.992 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.992 [2024-06-07 22:56:57.116864] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:04.992 [2024-06-07 22:56:57.116902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753363 ] 00:06:04.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.992 [2024-06-07 22:56:57.172198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.992 [2024-06-07 22:56:57.250589] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 753363 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 753363 00:06:05.927 22:56:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 753363 00:06:06.186 22:56:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 753363 ']' 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 753363 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 753363 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 753363' 00:06:06.186 killing process with pid 753363 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 753363 00:06:06.186 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 753363 00:06:06.444 00:06:06.444 real 0m1.527s 00:06:06.444 user 0m1.611s 00:06:06.444 sys 0m0.485s 00:06:06.444 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.444 22:56:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 ************************************ 00:06:06.444 END TEST default_locks_via_rpc 00:06:06.444 ************************************ 00:06:06.444 22:56:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.444 22:56:58 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:06.444 22:56:58 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.444 22:56:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 ************************************ 00:06:06.444 START TEST non_locking_app_on_locked_coremask 00:06:06.444 ************************************ 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=753626 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 753626 /var/tmp/spdk.sock 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 753626 ']' 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
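The locks_exist checks above reduce to asking lslocks whether the target PID holds a file lock whose path contains spdk_cpu_lock; the stray "lslocks: write error" lines are most likely lslocks reporting a broken pipe after grep -q exits on the first match, not a test failure. Standalone, the check is:

    pid=753363                                    # example PID from this run
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "process $pid holds its per-core lock file"
    fi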
00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.444 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 [2024-06-07 22:56:58.707623] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:06.444 [2024-06-07 22:56:58.707660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753626 ] 00:06:06.702 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.702 [2024-06-07 22:56:58.765381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.702 [2024-06-07 22:56:58.843752] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=753655 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 753655 /var/tmp/spdk2.sock 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 753655 ']' 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.269 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.269 [2024-06-07 22:56:59.541861] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:07.269 [2024-06-07 22:56:59.541909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753655 ] 00:06:07.528 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.528 [2024-06-07 22:56:59.622459] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
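The non_locking_app_on_locked_coremask case above relies on the second target opting out of core-lock claiming and using its own RPC socket, so the two instances can share core 0. Reduced to the two launch commands from the log (backgrounding with & added here for illustration):

    build/bin/spdk_tgt -m 0x1 &                                                  # first instance claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the claim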
00:06:07.528 [2024-06-07 22:56:59.622486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.528 [2024-06-07 22:56:59.773786] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.096 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:08.096 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:08.096 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 753626 00:06:08.096 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 753626 00:06:08.096 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.662 lslocks: write error 00:06:08.662 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 753626 00:06:08.662 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 753626 ']' 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 753626 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 753626 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 753626' 00:06:08.663 killing process with pid 753626 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 753626 00:06:08.663 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 753626 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 753655 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 753655 ']' 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 753655 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 753655 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 753655' 00:06:09.599 killing 
process with pid 753655 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 753655 00:06:09.599 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 753655 00:06:09.859 00:06:09.859 real 0m3.238s 00:06:09.859 user 0m3.463s 00:06:09.859 sys 0m0.939s 00:06:09.859 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.859 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.859 ************************************ 00:06:09.859 END TEST non_locking_app_on_locked_coremask 00:06:09.859 ************************************ 00:06:09.859 22:57:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:09.859 22:57:01 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:09.859 22:57:01 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.859 22:57:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.859 ************************************ 00:06:09.859 START TEST locking_app_on_unlocked_coremask 00:06:09.859 ************************************ 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=754247 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 754247 /var/tmp/spdk.sock 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 754247 ']' 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.859 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.859 [2024-06-07 22:57:02.003834] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:09.859 [2024-06-07 22:57:02.003869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754247 ] 00:06:09.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.859 [2024-06-07 22:57:02.063154] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
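killprocess, seen above for PIDs 753626 and 753655, checks what it is about to kill before sending the signal. A condensed version of the same steps (the real helper also handles the sudo case differently):

    pid=753655                                  # example PID from this run
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK target
    if [[ $name != sudo ]]; then
        kill "$pid"
        wait "$pid"                             # wait only works when $pid is a child of this shell
    fi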
00:06:09.859 [2024-06-07 22:57:02.063178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.117 [2024-06-07 22:57:02.142304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=754485 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 754485 /var/tmp/spdk2.sock 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 754485 ']' 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:10.685 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.685 [2024-06-07 22:57:02.830722] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
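With two targets alive at the same time, each is addressed through the socket named by its -r flag, and rpc.py selects the instance with -s. Any read-only call works for poking them; framework_get_reactors is shown here only as an example:

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_get_reactors   # first instance
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors   # second instance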
00:06:10.685 [2024-06-07 22:57:02.830766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754485 ] 00:06:10.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.685 [2024-06-07 22:57:02.908308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.944 [2024-06-07 22:57:03.053154] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.510 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:11.510 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:11.510 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 754485 00:06:11.510 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 754485 00:06:11.510 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.768 lslocks: write error 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 754247 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 754247 ']' 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 754247 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 754247 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 754247' 00:06:11.768 killing process with pid 754247 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 754247 00:06:11.768 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 754247 00:06:12.335 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 754485 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 754485 ']' 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 754485 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 754485 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:12.336 
22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 754485' 00:06:12.336 killing process with pid 754485 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 754485 00:06:12.336 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 754485 00:06:12.594 00:06:12.594 real 0m2.907s 00:06:12.594 user 0m3.102s 00:06:12.594 sys 0m0.776s 00:06:12.594 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.594 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.594 ************************************ 00:06:12.594 END TEST locking_app_on_unlocked_coremask 00:06:12.594 ************************************ 00:06:12.854 22:57:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:12.854 22:57:04 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.854 22:57:04 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.854 22:57:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.854 ************************************ 00:06:12.854 START TEST locking_app_on_locked_coremask 00:06:12.854 ************************************ 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=754762 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 754762 /var/tmp/spdk.sock 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 754762 ']' 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.854 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.854 [2024-06-07 22:57:04.987250] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:12.854 [2024-06-07 22:57:04.987293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754762 ] 00:06:12.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.854 [2024-06-07 22:57:05.047213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.854 [2024-06-07 22:57:05.125623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=754990 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 754990 /var/tmp/spdk2.sock 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 754990 /var/tmp/spdk2.sock 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 754990 /var/tmp/spdk2.sock 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 754990 ']' 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.791 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.791 [2024-06-07 22:57:05.819265] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:13.791 [2024-06-07 22:57:05.819314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754990 ] 00:06:13.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.791 [2024-06-07 22:57:05.899928] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 754762 has claimed it. 00:06:13.791 [2024-06-07 22:57:05.899963] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:14.357 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (754990) - No such process 00:06:14.357 ERROR: process (pid: 754990) is no longer running 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 754762 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 754762 00:06:14.357 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.925 lslocks: write error 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 754762 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 754762 ']' 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 754762 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 754762 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 754762' 00:06:14.925 killing process with pid 754762 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 754762 00:06:14.925 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 754762 00:06:15.184 00:06:15.184 real 0m2.362s 00:06:15.184 user 0m2.609s 00:06:15.184 sys 0m0.639s 00:06:15.184 22:57:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 
-- # xtrace_disable 00:06:15.184 22:57:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.185 ************************************ 00:06:15.185 END TEST locking_app_on_locked_coremask 00:06:15.185 ************************************ 00:06:15.185 22:57:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.185 22:57:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.185 22:57:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.185 22:57:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.185 ************************************ 00:06:15.185 START TEST locking_overlapped_coremask 00:06:15.185 ************************************ 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=755248 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 755248 /var/tmp/spdk.sock 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 755248 ']' 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.185 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.185 [2024-06-07 22:57:07.413342] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
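The NOT wrapper used for the failing launches above (and again in the overlapped-coremask tests below) inverts the exit status of the command it runs, so the test passes exactly when the wrapped command fails. A simplified stand-in for the harness helper:

    NOT() { if "$@"; then return 1; else return 0; fi; }   # succeed only when the wrapped command fails
    NOT waitforlisten 754990 /var/tmp/spdk2.sock           # second target must never come up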
00:06:15.185 [2024-06-07 22:57:07.413378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755248 ] 00:06:15.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.443 [2024-06-07 22:57:07.471200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.443 [2024-06-07 22:57:07.551578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.443 [2024-06-07 22:57:07.551676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.443 [2024-06-07 22:57:07.551678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=755607 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 755607 /var/tmp/spdk2.sock 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 755607 /var/tmp/spdk2.sock 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 755607 /var/tmp/spdk2.sock 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 755607 ']' 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:16.038 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.038 [2024-06-07 22:57:08.256148] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
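The overlap being exercised here follows directly from the hex core masks: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so the only shared core is core 2, which is exactly the core named in the claim error reported just below.

    printf '%d\n' 0x7    # 7  -> 0b00111 -> cores 0,1,2
    printf '%d\n' 0x1c   # 28 -> 0b11100 -> cores 2,3,4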
00:06:16.038 [2024-06-07 22:57:08.256211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755607 ] 00:06:16.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.296 [2024-06-07 22:57:08.341795] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 755248 has claimed it. 00:06:16.296 [2024-06-07 22:57:08.341830] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.863 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (755607) - No such process 00:06:16.863 ERROR: process (pid: 755607) is no longer running 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 755248 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 755248 ']' 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 755248 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 755248 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 755248' 00:06:16.863 killing process with pid 755248 00:06:16.863 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 755248 00:06:16.863 
22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 755248 00:06:17.122 00:06:17.122 real 0m1.881s 00:06:17.122 user 0m5.301s 00:06:17.122 sys 0m0.402s 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 ************************************ 00:06:17.122 END TEST locking_overlapped_coremask 00:06:17.122 ************************************ 00:06:17.122 22:57:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:17.122 22:57:09 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.122 22:57:09 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.122 22:57:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 ************************************ 00:06:17.122 START TEST locking_overlapped_coremask_via_rpc 00:06:17.122 ************************************ 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=755965 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 755965 /var/tmp/spdk.sock 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 755965 ']' 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.122 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.122 [2024-06-07 22:57:09.366329] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:17.122 [2024-06-07 22:57:09.366377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755965 ] 00:06:17.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.381 [2024-06-07 22:57:09.428007] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
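check_remaining_locks, run above after the failed overlapped launch, verifies that the surviving target still holds exactly one lock file per core in its 0x7 mask and nothing else. In shorthand:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for a 0x7 mask
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # fails if lock files are missing or left over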
00:06:17.381 [2024-06-07 22:57:09.428038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.381 [2024-06-07 22:57:09.502985] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.381 [2024-06-07 22:57:09.503105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.381 [2024-06-07 22:57:09.503107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=756143 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 756143 /var/tmp/spdk2.sock 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 756143 ']' 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.948 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.948 [2024-06-07 22:57:10.215737] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:17.948 [2024-06-07 22:57:10.215788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756143 ] 00:06:18.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.207 [2024-06-07 22:57:10.301596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.207 [2024-06-07 22:57:10.301624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.207 [2024-06-07 22:57:10.447784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.207 [2024-06-07 22:57:10.451050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.207 [2024-06-07 22:57:10.451051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.776 [2024-06-07 22:57:11.022077] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 755965 has claimed it. 
00:06:18.776 request: 00:06:18.776 { 00:06:18.776 "method": "framework_enable_cpumask_locks", 00:06:18.776 "req_id": 1 00:06:18.776 } 00:06:18.776 Got JSON-RPC error response 00:06:18.776 response: 00:06:18.776 { 00:06:18.776 "code": -32603, 00:06:18.776 "message": "Failed to claim CPU core: 2" 00:06:18.776 } 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 755965 /var/tmp/spdk.sock 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 755965 ']' 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.776 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 756143 /var/tmp/spdk2.sock 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 756143 ']' 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:19.035 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.294 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.294 00:06:19.295 real 0m2.084s 00:06:19.295 user 0m0.860s 00:06:19.295 sys 0m0.152s 00:06:19.295 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.295 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.295 ************************************ 00:06:19.295 END TEST locking_overlapped_coremask_via_rpc 00:06:19.295 ************************************ 00:06:19.295 22:57:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.295 22:57:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 755965 ]] 00:06:19.295 22:57:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 755965 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 755965 ']' 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 755965 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 755965 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 755965' 00:06:19.295 killing process with pid 755965 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 755965 00:06:19.295 22:57:11 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 755965 00:06:19.554 22:57:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 756143 ]] 00:06:19.554 22:57:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 756143 00:06:19.554 22:57:11 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 756143 ']' 00:06:19.554 22:57:11 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 756143 00:06:19.554 22:57:11 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:19.554 22:57:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:06:19.554 22:57:11 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 756143 00:06:19.813 22:57:11 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:19.813 22:57:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:19.813 22:57:11 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 756143' 00:06:19.813 killing process with pid 756143 00:06:19.813 22:57:11 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 756143 00:06:19.813 22:57:11 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 756143 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 755965 ]] 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 755965 00:06:20.072 22:57:12 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 755965 ']' 00:06:20.072 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 755965 00:06:20.072 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (755965) - No such process 00:06:20.072 22:57:12 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 755965 is not found' 00:06:20.072 Process with pid 755965 is not found 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 756143 ]] 00:06:20.072 22:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 756143 00:06:20.072 22:57:12 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 756143 ']' 00:06:20.073 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 756143 00:06:20.073 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (756143) - No such process 00:06:20.073 22:57:12 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 756143 is not found' 00:06:20.073 Process with pid 756143 is not found 00:06:20.073 22:57:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.073 00:06:20.073 real 0m16.756s 00:06:20.073 user 0m28.894s 00:06:20.073 sys 0m4.753s 00:06:20.073 22:57:12 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.073 22:57:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.073 ************************************ 00:06:20.073 END TEST cpu_locks 00:06:20.073 ************************************ 00:06:20.073 00:06:20.073 real 0m41.947s 00:06:20.073 user 1m20.522s 00:06:20.073 sys 0m7.978s 00:06:20.073 22:57:12 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.073 22:57:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.073 ************************************ 00:06:20.073 END TEST event 00:06:20.073 ************************************ 00:06:20.073 22:57:12 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:20.073 22:57:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:20.073 22:57:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.073 22:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.073 ************************************ 00:06:20.073 START TEST thread 00:06:20.073 ************************************ 00:06:20.073 22:57:12 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:20.073 * Looking for test storage... 00:06:20.073 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:20.073 22:57:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.073 22:57:12 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:20.073 22:57:12 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.073 22:57:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.332 ************************************ 00:06:20.332 START TEST thread_poller_perf 00:06:20.332 ************************************ 00:06:20.332 22:57:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.332 [2024-06-07 22:57:12.389048] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:20.332 [2024-06-07 22:57:12.389114] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756694 ] 00:06:20.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.332 [2024-06-07 22:57:12.451759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.332 [2024-06-07 22:57:12.524360] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.332 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:21.708 ====================================== 00:06:21.708 busy:2104897180 (cyc) 00:06:21.708 total_run_count: 409000 00:06:21.708 tsc_hz: 2100000000 (cyc) 00:06:21.708 ====================================== 00:06:21.708 poller_cost: 5146 (cyc), 2450 (nsec) 00:06:21.708 00:06:21.708 real 0m1.232s 00:06:21.708 user 0m1.146s 00:06:21.708 sys 0m0.083s 00:06:21.708 22:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.708 22:57:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.708 ************************************ 00:06:21.708 END TEST thread_poller_perf 00:06:21.708 ************************************ 00:06:21.708 22:57:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.708 22:57:13 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:21.708 22:57:13 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.708 22:57:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.708 ************************************ 00:06:21.708 START TEST thread_poller_perf 00:06:21.708 ************************************ 00:06:21.708 22:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.708 [2024-06-07 22:57:13.690787] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:21.708 [2024-06-07 22:57:13.690854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756942 ] 00:06:21.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.708 [2024-06-07 22:57:13.755926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.708 [2024-06-07 22:57:13.828503] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.708 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:22.645 ====================================== 00:06:22.645 busy:2101341142 (cyc) 00:06:22.645 total_run_count: 5526000 00:06:22.645 tsc_hz: 2100000000 (cyc) 00:06:22.645 ====================================== 00:06:22.645 poller_cost: 380 (cyc), 180 (nsec) 00:06:22.645 00:06:22.645 real 0m1.227s 00:06:22.645 user 0m1.143s 00:06:22.645 sys 0m0.080s 00:06:22.645 22:57:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.645 22:57:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.645 ************************************ 00:06:22.645 END TEST thread_poller_perf 00:06:22.645 ************************************ 00:06:22.905 22:57:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.905 00:06:22.905 real 0m2.677s 00:06:22.905 user 0m2.377s 00:06:22.905 sys 0m0.310s 00:06:22.905 22:57:14 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.905 22:57:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.905 ************************************ 00:06:22.905 END TEST thread 00:06:22.905 ************************************ 00:06:22.905 22:57:14 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:22.905 22:57:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:22.905 22:57:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.905 22:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.905 ************************************ 00:06:22.905 START TEST accel 00:06:22.905 ************************************ 00:06:22.905 22:57:14 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:22.905 * Looking for test storage... 00:06:22.905 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:22.905 22:57:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:22.905 22:57:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:22.905 22:57:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.905 22:57:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=757235 00:06:22.905 22:57:15 accel -- accel/accel.sh@63 -- # waitforlisten 757235 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@830 -- # '[' -z 757235 ']' 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.905 22:57:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:22.905 22:57:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:22.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.905 22:57:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.905 22:57:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.905 22:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.905 22:57:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.905 22:57:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.905 22:57:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.905 22:57:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:22.905 22:57:15 accel -- accel/accel.sh@41 -- # jq -r . 00:06:22.905 [2024-06-07 22:57:15.130540] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:22.905 [2024-06-07 22:57:15.130581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757235 ] 00:06:22.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.164 [2024-06-07 22:57:15.191471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.164 [2024-06-07 22:57:15.263529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@863 -- # return 0 00:06:23.732 22:57:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:23.732 22:57:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:23.732 22:57:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:23.732 22:57:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:23.732 22:57:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:23.732 22:57:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:23.732 22:57:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 
22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:23.732 22:57:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:23.732 22:57:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:23.732 22:57:15 accel -- accel/accel.sh@75 -- # killprocess 757235 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@949 -- # '[' -z 757235 ']' 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@953 -- # kill -0 757235 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@954 -- # uname 00:06:23.732 22:57:15 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:23.733 22:57:15 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 757235 00:06:23.991 22:57:16 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:23.991 22:57:16 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:23.991 22:57:16 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 757235' 00:06:23.991 killing process with pid 757235 00:06:23.991 22:57:16 accel -- common/autotest_common.sh@968 -- # kill 757235 00:06:23.991 22:57:16 accel -- common/autotest_common.sh@973 -- # wait 757235 00:06:24.250 22:57:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:24.250 22:57:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.250 22:57:16 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:24.250 22:57:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:24.250 22:57:16 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.250 22:57:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:24.250 22:57:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.250 22:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.250 ************************************ 00:06:24.250 START TEST accel_missing_filename 00:06:24.250 ************************************ 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.251 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:24.251 22:57:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:24.251 [2024-06-07 22:57:16.479790] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:24.251 [2024-06-07 22:57:16.479844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757499 ] 00:06:24.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.510 [2024-06-07 22:57:16.544478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.510 [2024-06-07 22:57:16.619498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.510 [2024-06-07 22:57:16.660563] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.510 [2024-06-07 22:57:16.720142] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:24.769 A filename is required. 
00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:24.769 00:06:24.769 real 0m0.341s 00:06:24.769 user 0m0.245s 00:06:24.769 sys 0m0.132s 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.769 22:57:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:24.769 ************************************ 00:06:24.769 END TEST accel_missing_filename 00:06:24.769 ************************************ 00:06:24.769 22:57:16 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:24.769 22:57:16 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:24.769 22:57:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.769 22:57:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.769 ************************************ 00:06:24.769 START TEST accel_compress_verify 00:06:24.769 ************************************ 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.769 22:57:16 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.769 22:57:16 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:24.769 22:57:16 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:24.769 [2024-06-07 22:57:16.887484] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:24.769 [2024-06-07 22:57:16.887547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757525 ] 00:06:24.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.769 [2024-06-07 22:57:16.951948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.769 [2024-06-07 22:57:17.021578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.029 [2024-06-07 22:57:17.062426] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.029 [2024-06-07 22:57:17.120995] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:25.029 00:06:25.029 Compression does not support the verify option, aborting. 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:25.029 00:06:25.029 real 0m0.333s 00:06:25.029 user 0m0.250s 00:06:25.029 sys 0m0.122s 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.029 22:57:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:25.029 ************************************ 00:06:25.029 END TEST accel_compress_verify 00:06:25.029 ************************************ 00:06:25.029 22:57:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:25.029 22:57:17 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:25.029 22:57:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.029 22:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.029 ************************************ 00:06:25.029 START TEST accel_wrong_workload 00:06:25.029 ************************************ 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:25.029 22:57:17 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:25.029 22:57:17 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:25.029 Unsupported workload type: foobar 00:06:25.029 [2024-06-07 22:57:17.283038] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:25.029 accel_perf options: 00:06:25.029 [-h help message] 00:06:25.029 [-q queue depth per core] 00:06:25.029 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.029 [-T number of threads per core 00:06:25.029 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.029 [-t time in seconds] 00:06:25.029 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.029 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:25.029 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.029 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.029 [-S for crc32c workload, use this seed value (default 0) 00:06:25.029 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.029 [-f for fill workload, use this BYTE value (default 255) 00:06:25.029 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.029 [-y verify result if this switch is on] 00:06:25.029 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.029 Can be used to spread operations across a wider range of memory. 
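Note: the usage text above is emitted because "foobar" is not a recognized -w workload type; accel_negative_buffers below provokes the same help output with an invalid -x value. For contrast, a sketch of valid invocations assembled from the listed options, mirroring the crc32c flags exercised by the tests that follow (again assuming this run's build path and leaving out the -c /dev/fd/62 config descriptor):
# crc32c for 1 second with seed value 32 and result verification, as in TEST accel_crc32c
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# crc32c with a 2-element io vector size, as in TEST accel_crc32c_C2
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2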
00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:25.029 00:06:25.029 real 0m0.032s 00:06:25.029 user 0m0.022s 00:06:25.029 sys 0m0.010s 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.029 22:57:17 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:25.029 ************************************ 00:06:25.029 END TEST accel_wrong_workload 00:06:25.029 ************************************ 00:06:25.029 Error: writing output failed: Broken pipe 00:06:25.289 22:57:17 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.289 ************************************ 00:06:25.289 START TEST accel_negative_buffers 00:06:25.289 ************************************ 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:25.289 22:57:17 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:25.289 -x option must be non-negative. 
00:06:25.289 [2024-06-07 22:57:17.382673] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:25.289 accel_perf options: 00:06:25.289 [-h help message] 00:06:25.289 [-q queue depth per core] 00:06:25.289 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.289 [-T number of threads per core 00:06:25.289 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.289 [-t time in seconds] 00:06:25.289 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.289 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:25.289 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.289 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.289 [-S for crc32c workload, use this seed value (default 0) 00:06:25.289 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.289 [-f for fill workload, use this BYTE value (default 255) 00:06:25.289 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.289 [-y verify result if this switch is on] 00:06:25.289 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.289 Can be used to spread operations across a wider range of memory. 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:25.289 00:06:25.289 real 0m0.034s 00:06:25.289 user 0m0.017s 00:06:25.289 sys 0m0.017s 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.289 22:57:17 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:25.289 ************************************ 00:06:25.289 END TEST accel_negative_buffers 00:06:25.289 ************************************ 00:06:25.289 Error: writing output failed: Broken pipe 00:06:25.289 22:57:17 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.289 22:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.289 ************************************ 00:06:25.289 START TEST accel_crc32c 00:06:25.289 ************************************ 00:06:25.289 22:57:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:25.289 22:57:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:25.289 [2024-06-07 22:57:17.477527] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:25.289 [2024-06-07 22:57:17.477574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757647 ] 00:06:25.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.289 [2024-06-07 22:57:17.536472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.549 [2024-06-07 22:57:17.609630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.549 22:57:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:26.928 22:57:18 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.928 00:06:26.928 real 0m1.340s 00:06:26.928 user 0m1.237s 00:06:26.928 sys 0m0.116s 00:06:26.928 22:57:18 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.928 22:57:18 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 ************************************ 00:06:26.928 END TEST accel_crc32c 00:06:26.928 ************************************ 00:06:26.928 22:57:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:26.928 22:57:18 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:26.928 22:57:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.928 22:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 ************************************ 00:06:26.928 START TEST accel_crc32c_C2 00:06:26.928 ************************************ 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.928 22:57:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:26.928 [2024-06-07 22:57:18.885195] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:26.928 [2024-06-07 22:57:18.885244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757918 ] 00:06:26.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.928 [2024-06-07 22:57:18.945496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.928 [2024-06-07 22:57:19.018457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
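The entries above show accel.sh assembling its JSON accel config and launching the accel_perf example with that config on /dev/fd/62. As a rough hand-run sketch (not part of the recorded run; it drops the generated config that accel.sh pipes in, so treat it as an approximation of the software-path runs checked by the accel_module=software lines in the trace), the two crc32c cases come down to:

  # 1-second crc32c run with verification; -S 32 matches the val=32 echoed right after accel_opc=crc32c
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # the accel_crc32c_C2 case runs the same workload with -C 2 added, exactly as captured above
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2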
00:06:26.928 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.929 22:57:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.308 00:06:28.308 real 0m1.341s 00:06:28.308 user 0m1.239s 00:06:28.308 sys 0m0.116s 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:28.308 22:57:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:28.308 ************************************ 00:06:28.308 END TEST accel_crc32c_C2 00:06:28.308 ************************************ 00:06:28.308 22:57:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:28.308 22:57:20 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:28.308 22:57:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.308 22:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.308 ************************************ 00:06:28.308 START TEST accel_copy 00:06:28.308 ************************************ 00:06:28.308 22:57:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:28.308 22:57:20 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.308 [2024-06-07 22:57:20.296912] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:28.308 [2024-06-07 22:57:20.296979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758184 ] 00:06:28.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.308 [2024-06-07 22:57:20.357593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.308 [2024-06-07 22:57:20.429989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.308 22:57:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
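Every accel_perf start in this trace logs 'EAL: No free 2048 kB hugepages reported on node 1'. As a hedged aside, using standard kernel interfaces rather than anything taken from this run, the 2 MB pools behind that notice can be inspected on the node with:

  # system-wide hugepage counters
  grep -i huge /proc/meminfo
  # the per-node 2 MB pool the EAL notice refers to (node 1 here)
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages \
      /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages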
00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:29.687 22:57:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.687 00:06:29.687 real 0m1.342s 00:06:29.687 user 0m1.234s 00:06:29.687 sys 0m0.120s 00:06:29.687 22:57:21 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.687 22:57:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.687 ************************************ 00:06:29.687 END TEST accel_copy 00:06:29.687 ************************************ 00:06:29.687 22:57:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.687 22:57:21 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:29.687 22:57:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.687 22:57:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.687 ************************************ 00:06:29.687 START TEST accel_fill 00:06:29.687 ************************************ 00:06:29.687 22:57:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.687 22:57:21 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:29.687 [2024-06-07 22:57:21.706412] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:29.687 [2024-06-07 22:57:21.706470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758445 ] 00:06:29.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.687 [2024-06-07 22:57:21.768587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.687 [2024-06-07 22:57:21.840343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.687 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.688 22:57:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:31.067 22:57:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.067 00:06:31.067 real 0m1.340s 00:06:31.067 user 0m1.240s 00:06:31.067 sys 0m0.114s 00:06:31.067 22:57:23 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.067 22:57:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:31.067 ************************************ 00:06:31.067 END TEST accel_fill 00:06:31.067 ************************************ 00:06:31.067 22:57:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:31.067 22:57:23 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:31.067 22:57:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.067 22:57:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.067 ************************************ 00:06:31.067 START TEST accel_copy_crc32c 00:06:31.067 ************************************ 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
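The fill case that just wrapped up and the copy_crc32c case starting here go through the same accel_test wrapper with different workload flags. A hedged sketch of the underlying invocations, reading the flag values off the echoes in the trace rather than from accel_perf's own help text:

  # fill: -f 128 surfaces as val=0x80 (the fill byte); the two val=64 echoes line up with -q 64 and -a 64
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
  # copy_crc32c: copy plus CRC check over the 4096-byte buffers shown above; the _C2 variant later adds -C 2
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y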
00:06:31.067 [2024-06-07 22:57:23.116567] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:31.067 [2024-06-07 22:57:23.116612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758712 ] 00:06:31.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.067 [2024-06-07 22:57:23.174875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.067 [2024-06-07 22:57:23.246625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.067 22:57:23 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.067 22:57:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.444 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.445 00:06:32.445 real 0m1.335s 00:06:32.445 user 0m1.238s 00:06:32.445 sys 0m0.111s 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.445 22:57:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:32.445 ************************************ 00:06:32.445 END TEST accel_copy_crc32c 00:06:32.445 ************************************ 00:06:32.445 22:57:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.445 22:57:24 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:32.445 22:57:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.445 22:57:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.445 ************************************ 00:06:32.445 START TEST accel_copy_crc32c_C2 00:06:32.445 ************************************ 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:32.445 [2024-06-07 22:57:24.522624] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:32.445 [2024-06-07 22:57:24.522692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758971 ] 00:06:32.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.445 [2024-06-07 22:57:24.582140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.445 [2024-06-07 22:57:24.653485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- 
# accel_opc=copy_crc32c 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.445 22:57:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.822 00:06:33.822 real 0m1.337s 00:06:33.822 user 0m1.231s 00:06:33.822 sys 0m0.120s 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.822 22:57:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.822 
************************************ 00:06:33.822 END TEST accel_copy_crc32c_C2 00:06:33.822 ************************************ 00:06:33.822 22:57:25 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:33.822 22:57:25 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:33.822 22:57:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.822 22:57:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.822 ************************************ 00:06:33.822 START TEST accel_dualcast 00:06:33.822 ************************************ 00:06:33.822 22:57:25 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:33.822 22:57:25 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:33.822 [2024-06-07 22:57:25.926391] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
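With accel_dualcast now starting, every case so far has followed the same shape: roughly 1.34 s of wall clock (real) for each 1-second workload, bracketed by START TEST / END TEST banners. Assuming the console output is saved to a file, build.log being a placeholder name rather than anything from this job, the per-case summaries can be pulled out with:

  # list each test banner together with its wall-clock line
  grep -E 'TEST accel|real[[:space:]]+[0-9]+m' build.log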
00:06:33.822 [2024-06-07 22:57:25.926456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759236 ] 00:06:33.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.822 [2024-06-07 22:57:25.986466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.822 [2024-06-07 22:57:26.058127] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 
22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.111 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.112 22:57:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:35.072 22:57:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.072 00:06:35.072 real 0m1.341s 00:06:35.072 user 0m1.236s 00:06:35.072 sys 0m0.117s 00:06:35.072 22:57:27 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.072 22:57:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:35.072 ************************************ 00:06:35.072 END TEST accel_dualcast 00:06:35.072 ************************************ 00:06:35.072 22:57:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:35.072 22:57:27 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:35.072 22:57:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.072 22:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.072 ************************************ 00:06:35.072 START TEST accel_compare 00:06:35.072 ************************************ 00:06:35.072 22:57:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.072 22:57:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.073 22:57:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.073 22:57:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.073 22:57:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:35.073 22:57:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:35.073 [2024-06-07 22:57:27.335656] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:35.073 [2024-06-07 22:57:27.335703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759534 ] 00:06:35.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.333 [2024-06-07 22:57:27.394996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.333 [2024-06-07 22:57:27.466002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.333 22:57:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:36.711 22:57:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.711 00:06:36.711 real 0m1.338s 00:06:36.711 user 0m1.234s 00:06:36.711 sys 0m0.117s 00:06:36.711 22:57:28 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.711 22:57:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:36.711 ************************************ 00:06:36.711 END TEST accel_compare 00:06:36.711 ************************************ 00:06:36.711 22:57:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:36.711 22:57:28 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:36.711 22:57:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.711 22:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.711 ************************************ 00:06:36.711 START TEST accel_xor 00:06:36.711 ************************************ 00:06:36.711 22:57:28 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:36.711 22:57:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:36.712 [2024-06-07 22:57:28.742941] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:36.712 [2024-06-07 22:57:28.742992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759804 ] 00:06:36.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.712 [2024-06-07 22:57:28.802736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.712 [2024-06-07 22:57:28.875933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.712 22:57:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 
22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.091 00:06:38.091 real 0m1.340s 00:06:38.091 user 0m1.229s 00:06:38.091 sys 0m0.126s 00:06:38.091 22:57:30 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.091 22:57:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 END TEST accel_xor 00:06:38.091 ************************************ 00:06:38.091 22:57:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:38.091 22:57:30 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:38.091 22:57:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.091 22:57:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 START TEST accel_xor 00:06:38.091 ************************************ 00:06:38.091 22:57:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.091 [2024-06-07 22:57:30.148899] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:38.091 [2024-06-07 22:57:30.148949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760054 ] 00:06:38.091 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.091 [2024-06-07 22:57:30.209352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.091 [2024-06-07 22:57:30.282409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.091 22:57:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 
22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:39.470 22:57:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.470 00:06:39.470 real 0m1.339s 00:06:39.470 user 0m1.235s 00:06:39.470 sys 0m0.117s 00:06:39.470 22:57:31 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.470 22:57:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:39.470 ************************************ 00:06:39.470 END TEST accel_xor 00:06:39.470 ************************************ 00:06:39.470 22:57:31 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:39.470 22:57:31 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:39.470 22:57:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.470 22:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.470 ************************************ 00:06:39.470 START TEST accel_dif_verify 00:06:39.470 ************************************ 00:06:39.470 22:57:31 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:39.470 [2024-06-07 22:57:31.556368] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:39.470 [2024-06-07 22:57:31.556416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760304 ] 00:06:39.470 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.470 [2024-06-07 22:57:31.616718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.470 [2024-06-07 22:57:31.688118] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.470 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 
22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.471 22:57:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 
22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:40.850 22:57:32 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.850 00:06:40.850 real 0m1.336s 00:06:40.850 user 0m1.231s 00:06:40.850 sys 0m0.121s 00:06:40.850 22:57:32 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.850 22:57:32 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:40.850 ************************************ 00:06:40.850 END TEST accel_dif_verify 00:06:40.850 ************************************ 00:06:40.850 22:57:32 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:40.850 22:57:32 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:40.850 22:57:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.850 22:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.850 ************************************ 00:06:40.850 START TEST accel_dif_generate 00:06:40.850 ************************************ 00:06:40.850 22:57:32 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.850 
22:57:32 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:40.850 22:57:32 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:40.850 [2024-06-07 22:57:32.962049] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:40.850 [2024-06-07 22:57:32.962097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760554 ] 00:06:40.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.850 [2024-06-07 22:57:33.019706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.850 [2024-06-07 22:57:33.090400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:41.109 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.110 22:57:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:42.047 22:57:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.047 00:06:42.047 real 0m1.333s 00:06:42.047 user 0m1.228s 00:06:42.047 sys 0m0.119s 00:06:42.047 
22:57:34 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.047 22:57:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:42.047 ************************************ 00:06:42.047 END TEST accel_dif_generate 00:06:42.047 ************************************ 00:06:42.047 22:57:34 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:42.047 22:57:34 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:42.047 22:57:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.047 22:57:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.307 ************************************ 00:06:42.307 START TEST accel_dif_generate_copy 00:06:42.307 ************************************ 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.307 [2024-06-07 22:57:34.364477] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:42.307 [2024-06-07 22:57:34.364540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760806 ] 00:06:42.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.307 [2024-06-07 22:57:34.425842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.307 [2024-06-07 22:57:34.497286] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.307 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.308 22:57:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.686 00:06:43.686 real 0m1.339s 00:06:43.686 user 0m1.237s 00:06:43.686 sys 0m0.114s 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.686 22:57:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.686 ************************************ 00:06:43.686 END TEST accel_dif_generate_copy 00:06:43.686 ************************************ 00:06:43.686 22:57:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:43.686 22:57:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:43.686 22:57:35 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:43.686 22:57:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.686 22:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.686 ************************************ 00:06:43.686 START TEST accel_comp 00:06:43.686 ************************************ 00:06:43.686 22:57:35 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:43.686 [2024-06-07 22:57:35.771212] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:43.686 [2024-06-07 22:57:35.771275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761053 ] 00:06:43.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.686 [2024-06-07 22:57:35.833025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.686 [2024-06-07 22:57:35.904502] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.686 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp 
-- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case 
"$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:57:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:45.064 22:57:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.064 00:06:45.064 real 0m1.341s 00:06:45.064 user 0m1.232s 00:06:45.064 sys 0m0.123s 00:06:45.064 22:57:37 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.064 22:57:37 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:45.064 ************************************ 00:06:45.064 END TEST accel_comp 00:06:45.064 ************************************ 00:06:45.064 22:57:37 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:45.064 22:57:37 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:45.064 22:57:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.064 22:57:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.064 ************************************ 00:06:45.064 START TEST accel_decomp 00:06:45.064 ************************************ 00:06:45.064 22:57:37 accel.accel_decomp -- 
common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:45.064 [2024-06-07 22:57:37.170779] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:45.064 [2024-06-07 22:57:37.170833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761305 ] 00:06:45.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.064 [2024-06-07 22:57:37.221837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.064 [2024-06-07 22:57:37.295089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.064 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 
-- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var 
val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:45.323 22:57:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.257 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.258 22:57:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.258 00:06:46.258 real 0m1.322s 00:06:46.258 user 0m1.227s 00:06:46.258 sys 0m0.110s 00:06:46.258 22:57:38 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.258 22:57:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:46.258 ************************************ 00:06:46.258 END TEST accel_decomp 00:06:46.258 ************************************ 00:06:46.258 22:57:38 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.258 22:57:38 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:46.258 22:57:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.258 22:57:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.517 ************************************ 00:06:46.517 START TEST accel_decomp_full 00:06:46.517 ************************************ 00:06:46.517 22:57:38 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:46.517 [2024-06-07 22:57:38.570342] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:46.517 [2024-06-07 22:57:38.570388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761558 ] 00:06:46.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.517 [2024-06-07 22:57:38.629422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.517 [2024-06-07 22:57:38.700785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.517 22:57:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.892 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.893 22:57:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.893 00:06:47.893 real 0m1.346s 00:06:47.893 user 0m1.238s 00:06:47.893 sys 0m0.121s 00:06:47.893 22:57:39 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.893 22:57:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:47.893 ************************************ 00:06:47.893 END TEST accel_decomp_full 00:06:47.893 ************************************ 00:06:47.893 22:57:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.893 22:57:39 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:47.893 22:57:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.893 22:57:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.893 ************************************ 00:06:47.893 START TEST accel_decomp_mcore 00:06:47.893 ************************************ 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:47.893 22:57:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:47.893 [2024-06-07 22:57:39.985002] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:47.893 [2024-06-07 22:57:39.985076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761804 ] 00:06:47.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.893 [2024-06-07 22:57:40.048041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.893 [2024-06-07 22:57:40.124720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.893 [2024-06-07 22:57:40.124817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.893 [2024-06-07 22:57:40.124919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.893 [2024-06-07 22:57:40.124921] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.893 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.893 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.893 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.893 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.893 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.152 22:57:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.089 22:57:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.089 00:06:49.089 real 0m1.358s 00:06:49.090 user 0m4.573s 00:06:49.090 sys 0m0.127s 00:06:49.090 22:57:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.090 22:57:41 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:49.090 ************************************ 00:06:49.090 END TEST accel_decomp_mcore 00:06:49.090 ************************************ 00:06:49.090 22:57:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.090 22:57:41 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:49.090 22:57:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.090 22:57:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.349 ************************************ 00:06:49.349 START TEST accel_decomp_full_mcore 00:06:49.349 ************************************ 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:49.349 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.350 22:57:41 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:49.350 [2024-06-07 22:57:41.411370] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:49.350 [2024-06-07 22:57:41.411437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762062 ] 00:06:49.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.350 [2024-06-07 22:57:41.470699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.350 [2024-06-07 22:57:41.544610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.350 [2024-06-07 22:57:41.544709] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.350 [2024-06-07 22:57:41.544797] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.350 [2024-06-07 22:57:41.544799] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.350 22:57:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.726 00:06:50.726 real 0m1.362s 00:06:50.726 user 0m4.606s 00:06:50.726 sys 0m0.131s 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.726 22:57:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:50.726 ************************************ 00:06:50.726 END TEST accel_decomp_full_mcore 00:06:50.726 ************************************ 00:06:50.726 22:57:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.726 22:57:42 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:50.726 22:57:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.726 22:57:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.726 ************************************ 00:06:50.726 START TEST accel_decomp_mthread 00:06:50.726 ************************************ 00:06:50.726 22:57:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.726 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:50.726 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:50.726 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.726 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:50.727 22:57:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
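Note: the decompress cases in this block differ only in the accel_perf flags traced above and below; a minimal standalone sketch of the four invocations follows (binary path, data file and flags are copied from the traces; dropping the harness's -c /dev/fd/62 JSON-config pipe and relying on the default software module is an assumption of this sketch, not something the log shows):
# accel_decomp_mcore: 0xf core mask, reactors started on cores 0-3
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
# accel_decomp_full_mcore: same mask plus -o 0; the trace records a 111250-byte chunk instead of 4096 bytes
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
# accel_decomp_mthread: single core with -T 2 worker threads
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2
# accel_decomp_full_mthread: -o 0 and -T 2 combined
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2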
00:06:50.727 [2024-06-07 22:57:42.839482] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:50.727 [2024-06-07 22:57:42.839531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762311 ] 00:06:50.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.727 [2024-06-07 22:57:42.898777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.727 [2024-06-07 22:57:42.969906] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.984 22:57:43 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:51.920 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.921 00:06:51.921 real 0m1.342s 00:06:51.921 user 0m1.232s 00:06:51.921 sys 0m0.125s 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.921 22:57:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:51.921 ************************************ 00:06:51.921 END TEST accel_decomp_mthread 00:06:51.921 ************************************ 00:06:51.921 22:57:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.921 22:57:44 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:51.921 22:57:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.921 22:57:44 accel 
-- common/autotest_common.sh@10 -- # set +x 00:06:52.181 ************************************ 00:06:52.181 START TEST accel_decomp_full_mthread 00:06:52.181 ************************************ 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:52.181 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:52.181 [2024-06-07 22:57:44.246088] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:06:52.181 [2024-06-07 22:57:44.246143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762558 ] 00:06:52.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.182 [2024-06-07 22:57:44.309521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.182 [2024-06-07 22:57:44.379343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.182 22:57:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.560 00:06:53.560 real 0m1.367s 00:06:53.560 user 0m1.255s 00:06:53.560 sys 0m0.125s 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.560 22:57:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:53.560 ************************************ 00:06:53.560 END TEST accel_decomp_full_mthread 00:06:53.560 
************************************ 00:06:53.560 22:57:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:53.560 22:57:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.560 22:57:45 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:53.560 22:57:45 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:53.560 22:57:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.561 22:57:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.561 22:57:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.561 22:57:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.561 22:57:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.561 22:57:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.561 22:57:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.561 22:57:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.561 22:57:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.561 ************************************ 00:06:53.561 START TEST accel_dif_functional_tests 00:06:53.561 ************************************ 00:06:53.561 22:57:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.561 [2024-06-07 22:57:45.701297] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:53.561 [2024-06-07 22:57:45.701331] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762812 ] 00:06:53.561 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.561 [2024-06-07 22:57:45.757429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.561 [2024-06-07 22:57:45.830203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.561 [2024-06-07 22:57:45.830298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.561 [2024-06-07 22:57:45.830300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.820 00:06:53.820 00:06:53.820 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.820 http://cunit.sourceforge.net/ 00:06:53.820 00:06:53.820 00:06:53.820 Suite: accel_dif 00:06:53.820 Test: verify: DIF generated, GUARD check ...passed 00:06:53.820 Test: verify: DIF generated, APPTAG check ...passed 00:06:53.820 Test: verify: DIF generated, REFTAG check ...passed 00:06:53.820 Test: verify: DIF not generated, GUARD check ...[2024-06-07 22:57:45.897959] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.820 passed 00:06:53.820 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 22:57:45.898001] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.820 passed 00:06:53.820 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 22:57:45.898040] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.820 passed 00:06:53.820 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:53.820 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 22:57:45.898081] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:53.820 passed 00:06:53.820 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:06:53.820 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:53.820 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:53.820 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 22:57:45.898175] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:53.820 passed 00:06:53.820 Test: verify copy: DIF generated, GUARD check ...passed 00:06:53.820 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:53.820 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:53.820 Test: verify copy: DIF not generated, GUARD check ...[2024-06-07 22:57:45.898278] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.820 passed 00:06:53.820 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-07 22:57:45.898299] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.820 passed 00:06:53.820 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-07 22:57:45.898319] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.820 passed 00:06:53.820 Test: generate copy: DIF generated, GUARD check ...passed 00:06:53.820 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:53.820 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:53.820 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:53.820 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:53.820 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:53.820 Test: generate copy: iovecs-len validate ...[2024-06-07 22:57:45.898473] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:53.820 passed 00:06:53.820 Test: generate copy: buffer alignment validate ...passed 00:06:53.820 00:06:53.820 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.820 suites 1 1 n/a 0 0 00:06:53.820 tests 26 26 26 0 0 00:06:53.820 asserts 115 115 115 0 n/a 00:06:53.820 00:06:53.820 Elapsed time = 0.002 seconds 00:06:53.820 00:06:53.820 real 0m0.407s 00:06:53.820 user 0m0.611s 00:06:53.820 sys 0m0.148s 00:06:53.820 22:57:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.820 22:57:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:53.820 ************************************ 00:06:53.820 END TEST accel_dif_functional_tests 00:06:53.820 ************************************ 00:06:54.081 00:06:54.081 real 0m31.105s 00:06:54.081 user 0m34.751s 00:06:54.081 sys 0m4.413s 00:06:54.081 22:57:46 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.081 22:57:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.081 ************************************ 00:06:54.081 END TEST accel 00:06:54.081 ************************************ 00:06:54.081 22:57:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:54.081 22:57:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.081 22:57:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.081 22:57:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.081 ************************************ 00:06:54.081 START TEST accel_rpc 00:06:54.081 ************************************ 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:54.081 * Looking for test storage... 00:06:54.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:54.081 22:57:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.081 22:57:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=762879 00:06:54.081 22:57:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 762879 00:06:54.081 22:57:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 762879 ']' 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:54.081 22:57:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.081 [2024-06-07 22:57:46.292351] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
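Note: the spdk_tgt launched just above with --wait-for-rpc is then driven through three RPCs, as the traces below show: accel_assign_opc to pin the copy opcode, framework_start_init to finish initialization, and accel_get_opc_assignments to confirm the result. A hedged sketch of the same sequence issued directly with scripts/rpc.py rather than the harness's rpc_cmd wrapper (command names and flags are the ones in the traces; the explicit wait for /var/tmp/spdk.sock is an assumption standing in for the harness's waitforlisten helper):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
sleep 2   # crude stand-in for waitforlisten: give the target time to open /var/tmp/spdk.sock
# the suite first assigns copy to a bogus module name ('incorrect') and then to software; the later assignment is what the post-init query reports
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_start_init
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software in the run traced below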
00:06:54.081 [2024-06-07 22:57:46.292399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762879 ] 00:06:54.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.081 [2024-06-07 22:57:46.352972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.375 [2024-06-07 22:57:46.434237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.943 22:57:47 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:54.943 22:57:47 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:54.943 22:57:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:54.943 22:57:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:54.943 22:57:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:54.943 22:57:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:54.943 22:57:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:54.943 22:57:47 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.943 22:57:47 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.943 22:57:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.943 ************************************ 00:06:54.943 START TEST accel_assign_opcode 00:06:54.943 ************************************ 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.943 [2024-06-07 22:57:47.124329] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.943 [2024-06-07 22:57:47.132349] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.943 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.202 22:57:47 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.202 software 00:06:55.202 00:06:55.202 real 0m0.230s 00:06:55.202 user 0m0.045s 00:06:55.202 sys 0m0.005s 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.202 22:57:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:55.202 ************************************ 00:06:55.202 END TEST accel_assign_opcode 00:06:55.202 ************************************ 00:06:55.202 22:57:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 762879 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 762879 ']' 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 762879 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 762879 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 762879' 00:06:55.202 killing process with pid 762879 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@968 -- # kill 762879 00:06:55.202 22:57:47 accel_rpc -- common/autotest_common.sh@973 -- # wait 762879 00:06:55.461 00:06:55.461 real 0m1.566s 00:06:55.461 user 0m1.636s 00:06:55.461 sys 0m0.417s 00:06:55.461 22:57:47 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.461 22:57:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.461 ************************************ 00:06:55.461 END TEST accel_rpc 00:06:55.461 ************************************ 00:06:55.719 22:57:47 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:55.719 22:57:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:55.719 22:57:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:55.719 22:57:47 -- common/autotest_common.sh@10 -- # set +x 00:06:55.719 ************************************ 00:06:55.719 START TEST app_cmdline 00:06:55.719 ************************************ 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:55.719 * Looking for test storage... 
00:06:55.719 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:55.719 22:57:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.719 22:57:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=763296 00:06:55.719 22:57:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 763296 00:06:55.719 22:57:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 763296 ']' 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:55.719 22:57:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.719 [2024-06-07 22:57:47.926718] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:06:55.719 [2024-06-07 22:57:47.926772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763296 ] 00:06:55.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.719 [2024-06-07 22:57:47.986347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.978 [2024-06-07 22:57:48.059413] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.545 22:57:48 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:56.545 22:57:48 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:56.545 22:57:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:56.803 { 00:06:56.803 "version": "SPDK v24.09-pre git sha1 86abcfbbd", 00:06:56.803 "fields": { 00:06:56.803 "major": 24, 00:06:56.803 "minor": 9, 00:06:56.803 "patch": 0, 00:06:56.803 "suffix": "-pre", 00:06:56.803 "commit": "86abcfbbd" 00:06:56.803 } 00:06:56.803 } 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.803 22:57:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:56.803 22:57:48 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.062 request: 00:06:57.062 { 00:06:57.062 "method": "env_dpdk_get_mem_stats", 00:06:57.062 "req_id": 1 00:06:57.062 } 00:06:57.062 Got JSON-RPC error response 00:06:57.062 response: 00:06:57.062 { 00:06:57.062 "code": -32601, 00:06:57.062 "message": "Method not found" 00:06:57.062 } 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:57.062 22:57:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 763296 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 763296 ']' 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 763296 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 763296 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 763296' 00:06:57.062 killing process with pid 763296 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@968 -- # kill 763296 00:06:57.062 22:57:49 app_cmdline -- common/autotest_common.sh@973 -- # wait 763296 00:06:57.329 00:06:57.329 real 0m1.678s 00:06:57.329 user 0m1.998s 00:06:57.329 sys 0m0.438s 00:06:57.329 22:57:49 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.329 22:57:49 app_cmdline -- common/autotest_common.sh@10 -- 
# set +x 00:06:57.329 ************************************ 00:06:57.329 END TEST app_cmdline 00:06:57.329 ************************************ 00:06:57.330 22:57:49 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:57.330 22:57:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:57.330 22:57:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.330 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.330 ************************************ 00:06:57.330 START TEST version 00:06:57.330 ************************************ 00:06:57.330 22:57:49 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:57.592 * Looking for test storage... 00:06:57.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:57.592 22:57:49 version -- app/version.sh@17 -- # get_header_version major 00:06:57.592 22:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # cut -f2 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.593 22:57:49 version -- app/version.sh@17 -- # major=24 00:06:57.593 22:57:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.593 22:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # cut -f2 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.593 22:57:49 version -- app/version.sh@18 -- # minor=9 00:06:57.593 22:57:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.593 22:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # cut -f2 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.593 22:57:49 version -- app/version.sh@19 -- # patch=0 00:06:57.593 22:57:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.593 22:57:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # cut -f2 00:06:57.593 22:57:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.593 22:57:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.593 22:57:49 version -- app/version.sh@22 -- # version=24.9 00:06:57.593 22:57:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.593 22:57:49 version -- app/version.sh@28 -- # version=24.9rc0 00:06:57.593 22:57:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:57.593 22:57:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.593 22:57:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:57.593 22:57:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:57.593 
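The version checks above never start a target at all; they parse include/spdk/version.h with the grep/cut/tr pipeline seen in the trace and compare the result with the installed Python package. A condensed sketch of that parsing, assuming version.h keeps its usual tab-separated '#define SPDK_VERSION_*' lines, is:

  hdr=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
  get_header_version() {   # $1 is MAJOR, MINOR, PATCH or SUFFIX
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24
  minor=$(get_header_version MINOR)    # 9
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version=$major.$minor                # .$patch is appended only when patch != 0
  # version.sh then maps the -pre suffix to an rc0 marker (24.9 -> 24.9rc0) and checks it
  # against: python3 -c 'import spdk; print(spdk.__version__)'  -> 24.9rc0 in this run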
00:06:57.593 real 0m0.152s 00:06:57.593 user 0m0.065s 00:06:57.593 sys 0m0.118s 00:06:57.593 22:57:49 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.593 22:57:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 ************************************ 00:06:57.593 END TEST version 00:06:57.593 ************************************ 00:06:57.593 22:57:49 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@198 -- # uname -s 00:06:57.593 22:57:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:57.593 22:57:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:57.593 22:57:49 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:57.593 22:57:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:57.593 22:57:49 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:57.593 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 22:57:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:57.593 22:57:49 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:06:57.593 22:57:49 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:57.593 22:57:49 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:57.593 22:57:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.593 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 ************************************ 00:06:57.593 START TEST nvmf_rdma 00:06:57.593 ************************************ 00:06:57.593 22:57:49 nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:57.593 * Looking for test storage... 00:06:57.593 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:57.593 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.851 22:57:49 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:57.851 22:57:49 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.851 22:57:49 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.851 22:57:49 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.852 22:57:49 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:49 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:49 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:49 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:06:57.852 22:57:49 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:57.852 22:57:49 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:57.852 22:57:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:57.852 22:57:49 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:57.852 22:57:49 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:57.852 22:57:49 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.852 22:57:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:57.852 ************************************ 00:06:57.852 START TEST nvmf_example 00:06:57.852 ************************************ 00:06:57.852 22:57:49 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:57.852 * Looking for test storage... 
00:06:57.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:57.852 22:57:50 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.852 22:57:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.421 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:04.422 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:04.422 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.422 22:57:55 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:04.422 Found net devices under 0000:da:00.0: mlx_0_0 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:04.422 Found net devices under 0000:da:00.1: mlx_0_1 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.422 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:04.423 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:04.423 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:04.423 altname enp218s0f0np0 00:07:04.423 altname ens818f0np0 00:07:04.423 inet 192.168.100.8/24 scope global mlx_0_0 00:07:04.423 valid_lft forever preferred_lft forever 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:04.423 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:04.423 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:04.423 altname enp218s0f1np1 00:07:04.423 altname ens818f1np1 00:07:04.423 inet 192.168.100.9/24 scope global mlx_0_1 00:07:04.423 valid_lft forever preferred_lft forever 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:04.423 22:57:55 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:04.423 192.168.100.9' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:04.423 192.168.100.9' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:04.423 
22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:04.423 192.168.100.9' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:04.423 22:57:55 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=767113 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 767113 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 767113 ']' 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
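Once the example nvmf target is listening on /var/tmp/spdk.sock, the rpc_cmd calls traced below build the test subsystem. A standalone equivalent using scripts/rpc.py would look roughly like the following; the bdev name Malloc0 and the 192.168.100.8:4420 RDMA listener are the values from this run, and rpc_cmd is assumed to be the usual thin wrapper around rpc.py on the default socket:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512                  # 64 MB malloc bdev, 512 B blocks -> "Malloc0"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420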
00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:04.423 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.682 22:57:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:04.941 22:57:57 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
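With the subsystem in place, the spdk_nvme_perf invocation just above drives it from the initiator side: -q 64 is the queue depth, -o 4096 the I/O size in bytes, -w randrw with -M 30 a 30%-read/70%-write random mix, -t 10 a ten-second run, and -r the transport ID of the RDMA listener created earlier. As a quick sanity check on the result table that follows, the throughput column is simply IOPS times I/O size: 26894.11 IO/s x 4096 B ≈ 110,158,275 B/s, and 110,158,275 / 1,048,576 ≈ 105.06 MiB/s, which matches the reported value.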
00:07:04.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.142 Initializing NVMe Controllers 00:07:17.142 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:17.142 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:17.142 Initialization complete. Launching workers. 00:07:17.142 ======================================================== 00:07:17.142 Latency(us) 00:07:17.142 Device Information : IOPS MiB/s Average min max 00:07:17.142 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26894.11 105.06 2379.35 637.12 15075.46 00:07:17.142 ======================================================== 00:07:17.142 Total : 26894.11 105.06 2379.35 637.12 15075.46 00:07:17.142 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:17.142 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:17.142 rmmod nvme_rdma 00:07:17.143 rmmod nvme_fabrics 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 767113 ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 767113 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 767113 ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 767113 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 767113 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 767113' 00:07:17.143 killing process with pid 767113 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@968 -- # kill 767113 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@973 -- # wait 767113 00:07:17.143 nvmf threads initialize successfully 00:07:17.143 bdev subsystem init successfully 00:07:17.143 created a nvmf target service 00:07:17.143 create targets's poll groups done 00:07:17.143 all subsystems of target started 00:07:17.143 nvmf target is running 00:07:17.143 all subsystems of target stopped 00:07:17.143 destroy targets's poll groups done 
00:07:17.143 destroyed the nvmf target service 00:07:17.143 bdev subsystem finish successfully 00:07:17.143 nvmf threads destroy successfully 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.143 00:07:17.143 real 0m18.794s 00:07:17.143 user 0m51.790s 00:07:17.143 sys 0m4.930s 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.143 22:58:08 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.143 ************************************ 00:07:17.143 END TEST nvmf_example 00:07:17.143 ************************************ 00:07:17.143 22:58:08 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:17.143 22:58:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:17.143 22:58:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.143 22:58:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:17.143 ************************************ 00:07:17.143 START TEST nvmf_filesystem 00:07:17.143 ************************************ 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:17.143 * Looking for test storage... 
00:07:17.143 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:17.143 22:58:08 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:17.143 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:17.144 22:58:08 
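[editor's note] A short sketch of the path resolution traced above: applications.sh works out the SPDK repository root from its own location and derives the binary directory from it; the per-app command arrays defined just below (NVMF_APP, SPDK_APP, and so on) build on that directory. The trace only records the resulting values, so the BASH_SOURCE-based mechanism and the trimming step are assumptions.

_common=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common, as traced
_root=${_common%/test/common}                             # .../spdk (the trimming expression is assumed)
_app_dir=$_root/build/bin                                 # nvmf_tgt and the other targets live here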
nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:17.144 #define SPDK_CONFIG_H 00:07:17.144 #define SPDK_CONFIG_APPS 1 00:07:17.144 #define SPDK_CONFIG_ARCH native 00:07:17.144 #undef SPDK_CONFIG_ASAN 00:07:17.144 #undef SPDK_CONFIG_AVAHI 00:07:17.144 #undef SPDK_CONFIG_CET 00:07:17.144 #define SPDK_CONFIG_COVERAGE 1 00:07:17.144 #define SPDK_CONFIG_CROSS_PREFIX 00:07:17.144 #undef SPDK_CONFIG_CRYPTO 00:07:17.144 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:17.144 #undef SPDK_CONFIG_CUSTOMOCF 00:07:17.144 #undef SPDK_CONFIG_DAOS 00:07:17.144 #define SPDK_CONFIG_DAOS_DIR 00:07:17.144 #define SPDK_CONFIG_DEBUG 1 00:07:17.144 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:17.144 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:17.144 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:17.144 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:17.144 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:17.144 #undef SPDK_CONFIG_DPDK_UADK 00:07:17.144 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:17.144 #define SPDK_CONFIG_EXAMPLES 1 00:07:17.144 #undef SPDK_CONFIG_FC 00:07:17.144 #define SPDK_CONFIG_FC_PATH 00:07:17.144 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:17.144 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:17.144 #undef SPDK_CONFIG_FUSE 00:07:17.144 #undef SPDK_CONFIG_FUZZER 00:07:17.144 #define SPDK_CONFIG_FUZZER_LIB 00:07:17.144 #undef SPDK_CONFIG_GOLANG 00:07:17.144 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:17.144 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:17.144 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:17.144 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:17.144 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:17.144 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:17.144 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:17.144 #define SPDK_CONFIG_IDXD 1 00:07:17.144 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:17.144 #undef SPDK_CONFIG_IPSEC_MB 00:07:17.144 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:17.144 #define SPDK_CONFIG_ISAL 1 00:07:17.144 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:17.144 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:17.144 #define SPDK_CONFIG_LIBDIR 00:07:17.144 #undef SPDK_CONFIG_LTO 00:07:17.144 #define SPDK_CONFIG_MAX_LCORES 00:07:17.144 #define SPDK_CONFIG_NVME_CUSE 1 00:07:17.144 #undef SPDK_CONFIG_OCF 00:07:17.144 #define SPDK_CONFIG_OCF_PATH 
00:07:17.144 #define SPDK_CONFIG_OPENSSL_PATH 00:07:17.144 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:17.144 #define SPDK_CONFIG_PGO_DIR 00:07:17.144 #undef SPDK_CONFIG_PGO_USE 00:07:17.144 #define SPDK_CONFIG_PREFIX /usr/local 00:07:17.144 #undef SPDK_CONFIG_RAID5F 00:07:17.144 #undef SPDK_CONFIG_RBD 00:07:17.144 #define SPDK_CONFIG_RDMA 1 00:07:17.144 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:17.144 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:17.144 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:17.144 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:17.144 #define SPDK_CONFIG_SHARED 1 00:07:17.144 #undef SPDK_CONFIG_SMA 00:07:17.144 #define SPDK_CONFIG_TESTS 1 00:07:17.144 #undef SPDK_CONFIG_TSAN 00:07:17.144 #define SPDK_CONFIG_UBLK 1 00:07:17.144 #define SPDK_CONFIG_UBSAN 1 00:07:17.144 #undef SPDK_CONFIG_UNIT_TESTS 00:07:17.144 #undef SPDK_CONFIG_URING 00:07:17.144 #define SPDK_CONFIG_URING_PATH 00:07:17.144 #undef SPDK_CONFIG_URING_ZNS 00:07:17.144 #undef SPDK_CONFIG_USDT 00:07:17.144 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:17.144 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:17.144 #undef SPDK_CONFIG_VFIO_USER 00:07:17.144 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:17.144 #define SPDK_CONFIG_VHOST 1 00:07:17.144 #define SPDK_CONFIG_VIRTIO 1 00:07:17.144 #undef SPDK_CONFIG_VTUNE 00:07:17.144 #define SPDK_CONFIG_VTUNE_DIR 00:07:17.144 #define SPDK_CONFIG_WERROR 1 00:07:17.144 #define SPDK_CONFIG_WPDK_DIR 00:07:17.144 #undef SPDK_CONFIG_XNVME 00:07:17.144 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:17.144 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:17.145 22:58:08 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- 
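[editor's note] The repeated ": <value>" / "export <FLAG>" pairs above are consistent with the ${VAR:=default} idiom: every autotest switch gets a value and is then exported so child scripts see a concrete setting. The idiom itself is an inference from the trace; the values below are the ones that take effect in this run.

: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";    export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_CLI:=1}";          export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=mlx5}";      export SPDK_TEST_NVMF_NICS
: "${SPDK_RUN_UBSAN:=1}";              export SPDK_RUN_UBSAN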
common/autotest_common.sh@158 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:17.145 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- 
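[editor's note] Condensed from the exports traced above: the SPDK, DPDK, and libvfio-user library directories are appended to LD_LIBRARY_PATH, and the python/ and rpc_plugins/ directories to PYTHONPATH. The recorded values show the same triple repeated several times, which is what repeated sourcing of the same helper in one job produces. A sketch, with $rootdir standing in for /var/jenkins/workspace/nvmf-phy-autotest/spdk:

export SPDK_LIB_DIR=$rootdir/build/lib
export DPDK_LIB_DIR=$rootdir/dpdk/build/lib
export VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
export PYTHONPATH=$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins
export PYTHONDONTWRITEBYTECODE=1          # keep the workspace free of .pyc files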
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- 
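[editor's note] Collected from the trace above, the sanitizer environment for this run: strict ASAN/UBSAN runtime options plus an LSAN suppression file so known libfuse3 leaks do not fail the job. The redirection of the "echo leak:libfuse3.so" into the suppression file is inferred; the trace records the echo but not its target.

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"        # redirection target assumed
export LSAN_OPTIONS=suppressions=$asan_suppression_file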
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 769481 ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 769481 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.dRHBzT 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dRHBzT/tests/target /tmp/spdk.dRHBzT 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:07:17.146 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=185254404096 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974316032 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10719911936 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931522048 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185281024 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9584640 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97985486848 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 
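[editor's note] The mounts/fss/sizes/avails/uses entries above come from parsing "df -T" into associative arrays keyed by mount point, following the "read -r source fs size use avail _ mount" loop in the trace. A sketch; the byte-scale numbers recorded above imply df was asked for byte units or the values were converted, and that detail is not visible in this excerpt.

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do         # field order matches the traced read
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size                                  # e.g. sizes[/]=195974316032 above
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)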
00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1671168 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:17.147 * Looking for test storage... 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=185254404096 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12934504448 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.147 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 
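[editor's note] In sketch form, the storage-selection logic traced above (variable names follow the trace; the arrays come from the df parsing sketched just before, and the tmpfs/ramfs special-casing at @380 is omitted): the filesystem backing the test directory is the overlay root with ~185 GB available, comfortably above the ~2.2 GB request, so the test directory itself is exported as SPDK_TEST_STORAGE.

requested_size=2214592512                                  # value recorded at @358: the 2 GiB request plus extra
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # "/" in this run
target_space=${avails[$mount]}                             # 185254404096
if (( target_space >= requested_size )); then
    new_size=$(( uses[$mount] + requested_size ))          # 10719911936 + 2214592512 = 12934504448
    if (( new_size * 100 / sizes[$mount] <= 95 )); then    # about 6% of the filesystem: fine
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
    fi
fi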
00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.147 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- 
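[editor's note] The target's command line is accumulated as a bash array: applications.sh seeds NVMF_APP with the nvmf_tgt binary, and build_nvmf_app_args appends the shared-memory id and the 0xFFFF mask recorded above (argument semantics are not asserted here) plus the NO_HUGE options, which are empty in this run. A condensed sketch:

: "${NVMF_APP_SHM_ID:=0}"; export NVMF_APP_SHM_ID   # 0 in this run, per the trace
NVMF_APP=("$_app_dir/nvmf_tgt")                     # seeded by applications.sh earlier in the trace
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)         # appended by build_nvmf_app_args, as recorded above
NVMF_APP+=("${NO_HUGE[@]}")                         # empty array here, so nothing is added
# presumably expanded later as "${NVMF_APP[@]}" when the target is launched (not shown in this excerpt)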
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:17.148 22:58:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.148 22:58:09 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
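[editor's note] A condensed sketch of the NIC discovery sequence that the lines above and just below trace: candidate PCI addresses are grouped per NIC family by looking up vendor:device pairs in a prebuilt pci_bus_cache map, only the Mellanox list is kept because SPDK_TEST_NVMF_NICS=mlx5, each matched function is mapped to its kernel netdev through sysfs (mlx_0_0 and mlx_0_1 here), and the RDMA kernel modules are loaded. Only the ConnectX id actually matched in this run is shown; the cache layout and echo format follow the trace.

declare -a e810 x722 mlx pci_devs net_devs
# pci_bus_cache["<vendor>:<device>"] -> PCI addresses (assumed prebuilt by the sourced helpers)
mlx+=(${pci_bus_cache["0x15b3:0x1015"]})             # 0000:da:00.0 and 0000:da:00.1 in this run
pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
[[ $SPDK_TEST_NVMF_NICS == mlx5 ]] && pci_devs=("${mlx[@]}")
for pci in "${pci_devs[@]}"; do
    echo "Found $pci (0x15b3 - 0x1015)"
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the sysfs path, keep mlx_0_0 / mlx_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
NVME_CONNECT='nvme connect -i 15'                    # '-i 15' added for the rdma transport, exactly as traced
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"                                  # module list as traced, before allocate_nic_ips runs
done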
00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:22.421 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:22.422 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:22.422 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:22.422 Found net devices under 0000:da:00.0: mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:22.422 Found net devices under 0000:da:00.1: mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:22.422 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:22.422 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:22.422 altname enp218s0f0np0 00:07:22.422 altname ens818f0np0 00:07:22.422 inet 192.168.100.8/24 scope global mlx_0_0 00:07:22.422 valid_lft forever preferred_lft forever 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:07:22.422 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:22.422 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:22.422 altname enp218s0f1np1 00:07:22.422 altname ens818f1np1 00:07:22.422 inet 192.168.100.9/24 scope global mlx_0_1 00:07:22.422 valid_lft forever preferred_lft forever 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:22.422 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
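The block above is rdma_device_init plus allocate_nic_ips: load the IB/RDMA kernel stack, walk the mlx_0_* netdevs, and read the IPv4 address off each one. Condensed to a sketch (interface names and addresses are simply what this host reports):

    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9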
00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:22.423 192.168.100.9' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:22.423 192.168.100.9' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:22.423 192.168.100.9' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.423 ************************************ 00:07:22.423 START TEST nvmf_filesystem_no_in_capsule 00:07:22.423 ************************************ 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=772814 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 772814 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@830 -- # '[' -z 772814 ']' 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:22.423 22:58:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.423 [2024-06-07 22:58:14.545519] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:22.423 [2024-06-07 22:58:14.545559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.423 [2024-06-07 22:58:14.606359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.423 [2024-06-07 22:58:14.688713] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.423 [2024-06-07 22:58:14.688750] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.423 [2024-06-07 22:58:14.688757] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.423 [2024-06-07 22:58:14.688762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.423 [2024-06-07 22:58:14.688769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
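nvmfappstart then launches the target with core mask 0xF, and waitforlisten blocks until pid 772814 answers on /var/tmp/spdk.sock. A minimal sketch of that step (the polling loop here is illustrative, not a copy of the harness helper):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the RPC socket is serviceable before issuing configuration calls
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done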
00:07:22.423 [2024-06-07 22:58:14.688812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.423 [2024-06-07 22:58:14.688909] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.423 [2024-06-07 22:58:14.688997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.423 [2024-06-07 22:58:14.688998] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.359 [2024-06-07 22:58:15.387865] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:23.359 [2024-06-07 22:58:15.407940] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x92b9d0/0x92fec0) succeed. 00:07:23.359 [2024-06-07 22:58:15.416954] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x92d010/0x971550) succeed. 
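With the target up, the test configures it over RPC: an RDMA transport requested with in-capsule data size 0 (which the target raises to the 256-byte minimum, per the rdma.c warning above), a 512 MiB malloc disk with 512-byte blocks, a subsystem, its namespace, and an RDMA listener on 192.168.100.8:4420, after which the host connects with nvme-cli. The calls appear verbatim in the trace that follows; as a sketch, assuming rpc_cmd maps straight onto scripts/rpc.py as it does in the SPDK harness:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1       # 512 MiB, 512-byte blocks -> 1048576 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # initiator side; --hostnqn/--hostid are passed exactly as shown in the trace below
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420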
00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.359 Malloc1 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.359 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.618 [2024-06-07 22:58:15.662899] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.618 22:58:15 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.618 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:23.618 { 00:07:23.618 "name": "Malloc1", 00:07:23.618 "aliases": [ 00:07:23.618 "25ae4777-593b-49a0-b57b-e2986014ef85" 00:07:23.618 ], 00:07:23.618 "product_name": "Malloc disk", 00:07:23.618 "block_size": 512, 00:07:23.618 "num_blocks": 1048576, 00:07:23.618 "uuid": "25ae4777-593b-49a0-b57b-e2986014ef85", 00:07:23.618 "assigned_rate_limits": { 00:07:23.618 "rw_ios_per_sec": 0, 00:07:23.618 "rw_mbytes_per_sec": 0, 00:07:23.618 "r_mbytes_per_sec": 0, 00:07:23.618 "w_mbytes_per_sec": 0 00:07:23.618 }, 00:07:23.618 "claimed": true, 00:07:23.618 "claim_type": "exclusive_write", 00:07:23.619 "zoned": false, 00:07:23.619 "supported_io_types": { 00:07:23.619 "read": true, 00:07:23.619 "write": true, 00:07:23.619 "unmap": true, 00:07:23.619 "write_zeroes": true, 00:07:23.619 "flush": true, 00:07:23.619 "reset": true, 00:07:23.619 "compare": false, 00:07:23.619 "compare_and_write": false, 00:07:23.619 "abort": true, 00:07:23.619 "nvme_admin": false, 00:07:23.619 "nvme_io": false 00:07:23.619 }, 00:07:23.619 "memory_domains": [ 00:07:23.619 { 00:07:23.619 "dma_device_id": "system", 00:07:23.619 "dma_device_type": 1 00:07:23.619 }, 00:07:23.619 { 00:07:23.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.619 "dma_device_type": 2 00:07:23.619 } 00:07:23.619 ], 00:07:23.619 "driver_specific": {} 00:07:23.619 } 00:07:23.619 ]' 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.619 22:58:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:24.554 22:58:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.555 22:58:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:24.555 22:58:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.555 22:58:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:24.555 22:58:16 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.087 22:58:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:27.087 22:58:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.057 ************************************ 00:07:28.057 START TEST filesystem_ext4 00:07:28.057 ************************************ 00:07:28.057 22:58:20 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.057 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.057 Discarding device blocks: 0/522240 done 00:07:28.057 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.057 Filesystem UUID: 3449e035-c307-4e6a-a0f3-bbdae58626c5 00:07:28.057 Superblock backups stored on blocks: 00:07:28.057 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.057 00:07:28.057 Allocating group tables: 0/64 done 00:07:28.057 Writing inode tables: 0/64 done 00:07:28.057 Creating journal (8192 blocks): done 00:07:28.057 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.057 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:28.057 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 772814 00:07:28.058 22:58:20 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.058 00:07:28.058 real 0m0.176s 00:07:28.058 user 0m0.030s 00:07:28.058 sys 0m0.059s 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.058 ************************************ 00:07:28.058 END TEST filesystem_ext4 00:07:28.058 ************************************ 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.058 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.317 ************************************ 00:07:28.317 START TEST filesystem_btrfs 00:07:28.317 ************************************ 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.317 btrfs-progs v6.6.2 00:07:28.317 See https://btrfs.readthedocs.io for more information. 
00:07:28.317 00:07:28.317 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:28.317 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.317 this does not affect your deployments: 00:07:28.317 - DUP for metadata (-m dup) 00:07:28.317 - enabled no-holes (-O no-holes) 00:07:28.317 - enabled free-space-tree (-R free-space-tree) 00:07:28.317 00:07:28.317 Label: (null) 00:07:28.317 UUID: 7f15002d-45e5-4ec3-a60c-2341e1fde5ed 00:07:28.317 Node size: 16384 00:07:28.317 Sector size: 4096 00:07:28.317 Filesystem size: 510.00MiB 00:07:28.317 Block group profiles: 00:07:28.317 Data: single 8.00MiB 00:07:28.317 Metadata: DUP 32.00MiB 00:07:28.317 System: DUP 8.00MiB 00:07:28.317 SSD detected: yes 00:07:28.317 Zoned device: no 00:07:28.317 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.317 Runtime features: free-space-tree 00:07:28.317 Checksum: crc32c 00:07:28.317 Number of devices: 1 00:07:28.317 Devices: 00:07:28.317 ID SIZE PATH 00:07:28.317 1 510.00MiB /dev/nvme0n1p1 00:07:28.317 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 772814 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.317 00:07:28.317 real 0m0.241s 00:07:28.317 user 0m0.022s 00:07:28.317 sys 0m0.120s 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.317 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.317 ************************************ 00:07:28.317 END TEST filesystem_btrfs 00:07:28.317 ************************************ 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.576 ************************************ 00:07:28.576 START TEST filesystem_xfs 00:07:28.576 ************************************ 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:28.576 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:28.576 = sectsz=512 attr=2, projid32bit=1 00:07:28.576 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:28.576 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:28.576 data = bsize=4096 blocks=130560, imaxpct=25 00:07:28.576 = sunit=0 swidth=0 blks 00:07:28.576 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:28.576 log =internal log bsize=4096 blocks=16384, version=2 00:07:28.576 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:28.576 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:28.576 Discarding blocks...Done. 
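Each filesystem_<fstype> subtest, ext4 and btrfs above and xfs here, exercises the same body against the namespace that the connect exposed as /dev/nvme0n1 (partitioned once with parted/partprobe further up): build the filesystem, mount it, do a touch/sync/rm round trip, unmount, and confirm the target is still alive. Condensed sketch of that body:

    mkfs.xfs -f /dev/nvme0n1p1       # mkfs.ext4 -F and mkfs.btrfs -f in the sibling subtests
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 772814                   # the nvmf target must still be running

After the three subtests the harness syncs, disconnects with 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1', deletes the subsystem over RPC, and kills pid 772814, as the trace below shows.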
00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 772814 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.576 00:07:28.576 real 0m0.190s 00:07:28.576 user 0m0.018s 00:07:28.576 sys 0m0.072s 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.576 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.576 ************************************ 00:07:28.576 END TEST filesystem_xfs 00:07:28.576 ************************************ 00:07:28.835 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:28.835 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:28.835 22:58:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 772814 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 772814 ']' 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 772814 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 772814 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 772814' 00:07:29.771 killing process with pid 772814 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 772814 00:07:29.771 22:58:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 772814 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:30.339 00:07:30.339 real 0m7.841s 00:07:30.339 user 0m30.581s 00:07:30.339 sys 0m0.997s 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.339 ************************************ 00:07:30.339 END TEST nvmf_filesystem_no_in_capsule 00:07:30.339 ************************************ 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.339 ************************************ 00:07:30.339 START TEST nvmf_filesystem_in_capsule 00:07:30.339 ************************************ 00:07:30.339 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=774242 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 774242 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 774242 ']' 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:30.340 22:58:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.340 [2024-06-07 22:58:22.460894] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:30.340 [2024-06-07 22:58:22.460932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.340 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.340 [2024-06-07 22:58:22.523842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.340 [2024-06-07 22:58:22.598018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.340 [2024-06-07 22:58:22.598062] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.340 [2024-06-07 22:58:22.598068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.340 [2024-06-07 22:58:22.598074] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:30.340 [2024-06-07 22:58:22.598079] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.340 [2024-06-07 22:58:22.598137] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.340 [2024-06-07 22:58:22.598154] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.340 [2024-06-07 22:58:22.598247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.340 [2024-06-07 22:58:22.598249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.276 [2024-06-07 22:58:23.330147] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc29d0/0xbc6ec0) succeed. 00:07:31.276 [2024-06-07 22:58:23.339250] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc4010/0xc08550) succeed. 
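The nvmf_filesystem_in_capsule run that starts here repeats the whole flow against a fresh target (pid 774242); the functional difference is the transport's in-capsule data size, 4096 bytes instead of 0, as the RPC in the trace below shows. Sketch of the one changed call:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096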
00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.276 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.534 Malloc1 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.534 [2024-06-07 22:58:23.604290] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.534 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.535 
22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:31.535 { 00:07:31.535 "name": "Malloc1", 00:07:31.535 "aliases": [ 00:07:31.535 "55075af0-4cdb-4330-b6af-fcf6875dd1ff" 00:07:31.535 ], 00:07:31.535 "product_name": "Malloc disk", 00:07:31.535 "block_size": 512, 00:07:31.535 "num_blocks": 1048576, 00:07:31.535 "uuid": "55075af0-4cdb-4330-b6af-fcf6875dd1ff", 00:07:31.535 "assigned_rate_limits": { 00:07:31.535 "rw_ios_per_sec": 0, 00:07:31.535 "rw_mbytes_per_sec": 0, 00:07:31.535 "r_mbytes_per_sec": 0, 00:07:31.535 "w_mbytes_per_sec": 0 00:07:31.535 }, 00:07:31.535 "claimed": true, 00:07:31.535 "claim_type": "exclusive_write", 00:07:31.535 "zoned": false, 00:07:31.535 "supported_io_types": { 00:07:31.535 "read": true, 00:07:31.535 "write": true, 00:07:31.535 "unmap": true, 00:07:31.535 "write_zeroes": true, 00:07:31.535 "flush": true, 00:07:31.535 "reset": true, 00:07:31.535 "compare": false, 00:07:31.535 "compare_and_write": false, 00:07:31.535 "abort": true, 00:07:31.535 "nvme_admin": false, 00:07:31.535 "nvme_io": false 00:07:31.535 }, 00:07:31.535 "memory_domains": [ 00:07:31.535 { 00:07:31.535 "dma_device_id": "system", 00:07:31.535 "dma_device_type": 1 00:07:31.535 }, 00:07:31.535 { 00:07:31.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.535 "dma_device_type": 2 00:07:31.535 } 00:07:31.535 ], 00:07:31.535 "driver_specific": {} 00:07:31.535 } 00:07:31.535 ]' 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.535 22:58:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:32.469 22:58:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.470 22:58:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:32.470 22:58:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.470 22:58:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:32.470 22:58:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.999 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.000 22:58:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:35.932 22:58:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:35.932 22:58:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.932 22:58:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:35.932 22:58:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.932 22:58:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.932 ************************************ 00:07:35.932 START TEST filesystem_in_capsule_ext4 00:07:35.932 ************************************ 00:07:35.932 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:35.932 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 
-- target/filesystem.sh@18 -- # fstype=ext4 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:35.933 mke2fs 1.46.5 (30-Dec-2021) 00:07:35.933 Discarding device blocks: 0/522240 done 00:07:35.933 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:35.933 Filesystem UUID: ade458e2-5203-497b-b4a1-b6dab81e901a 00:07:35.933 Superblock backups stored on blocks: 00:07:35.933 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:35.933 00:07:35.933 Allocating group tables: 0/64 done 00:07:35.933 Writing inode tables: 0/64 done 00:07:35.933 Creating journal (8192 blocks): done 00:07:35.933 Writing superblocks and filesystem accounting information: 0/64 done 00:07:35.933 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 774242 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # 
lsblk -l -o NAME 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.933 00:07:35.933 real 0m0.176s 00:07:35.933 user 0m0.026s 00:07:35.933 sys 0m0.062s 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.933 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:35.933 ************************************ 00:07:35.933 END TEST filesystem_in_capsule_ext4 00:07:35.933 ************************************ 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.191 ************************************ 00:07:36.191 START TEST filesystem_in_capsule_btrfs 00:07:36.191 ************************************ 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:36.191 btrfs-progs v6.6.2 00:07:36.191 See 
https://btrfs.readthedocs.io for more information. 00:07:36.191 00:07:36.191 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:36.191 NOTE: several default settings have changed in version 5.15, please make sure 00:07:36.191 this does not affect your deployments: 00:07:36.191 - DUP for metadata (-m dup) 00:07:36.191 - enabled no-holes (-O no-holes) 00:07:36.191 - enabled free-space-tree (-R free-space-tree) 00:07:36.191 00:07:36.191 Label: (null) 00:07:36.191 UUID: 6a7ef6da-4657-4a5a-87c8-b689c9e21daf 00:07:36.191 Node size: 16384 00:07:36.191 Sector size: 4096 00:07:36.191 Filesystem size: 510.00MiB 00:07:36.191 Block group profiles: 00:07:36.191 Data: single 8.00MiB 00:07:36.191 Metadata: DUP 32.00MiB 00:07:36.191 System: DUP 8.00MiB 00:07:36.191 SSD detected: yes 00:07:36.191 Zoned device: no 00:07:36.191 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:36.191 Runtime features: free-space-tree 00:07:36.191 Checksum: crc32c 00:07:36.191 Number of devices: 1 00:07:36.191 Devices: 00:07:36.191 ID SIZE PATH 00:07:36.191 1 510.00MiB /dev/nvme0n1p1 00:07:36.191 00:07:36.191 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:36.192 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.192 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.192 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:36.192 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.192 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 774242 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.450 00:07:36.450 real 0m0.243s 00:07:36.450 user 0m0.024s 00:07:36.450 sys 0m0.124s 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:36.450 ************************************ 00:07:36.450 END TEST 
filesystem_in_capsule_btrfs 00:07:36.450 ************************************ 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.450 ************************************ 00:07:36.450 START TEST filesystem_in_capsule_xfs 00:07:36.450 ************************************ 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:36.450 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:36.450 = sectsz=512 attr=2, projid32bit=1 00:07:36.450 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:36.450 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:36.450 data = bsize=4096 blocks=130560, imaxpct=25 00:07:36.450 = sunit=0 swidth=0 blks 00:07:36.450 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:36.450 log =internal log bsize=4096 blocks=16384, version=2 00:07:36.450 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:36.450 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:36.450 Discarding blocks...Done. 
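Each filesystem subtest (ext4 and btrfs above, xfs here) runs the same host-side pattern once the namespace shows up as nvme0n1: partition it, build a filesystem, do a small write/remove cycle through the mount, then confirm the target process is still alive. A condensed sketch, with device, mount point, and pid variable taken from this log (not the harness code itself; mkfs flags differ per fstype - ext4 uses -F, btrfs and xfs use -f):

# Condensed sketch of the per-filesystem body, assuming the earlier
# 'nvme connect ... -t rdma -a 192.168.100.8 -s 4420' step succeeded.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkfs.xfs -f /dev/nvme0n1p1              # or mkfs.ext4 -F / mkfs.btrfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                      # target (pid 774242 here) must survive the I/O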
00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.450 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 774242 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.708 00:07:36.708 real 0m0.191s 00:07:36.708 user 0m0.029s 00:07:36.708 sys 0m0.058s 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:36.708 ************************************ 00:07:36.708 END TEST filesystem_in_capsule_xfs 00:07:36.708 ************************************ 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:36.708 22:58:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.641 22:58:29 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.641 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 774242 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 774242 ']' 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 774242 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 774242 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 774242' 00:07:37.642 killing process with pid 774242 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 774242 00:07:37.642 22:58:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 774242 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:38.209 00:07:38.209 real 0m7.863s 00:07:38.209 user 0m30.615s 00:07:38.209 sys 0m1.056s 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 ************************************ 00:07:38.209 END TEST nvmf_filesystem_in_capsule 00:07:38.209 ************************************ 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:38.209 22:58:30 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:38.209 rmmod nvme_rdma 00:07:38.209 rmmod nvme_fabrics 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:38.209 00:07:38.209 real 0m21.547s 00:07:38.209 user 1m2.760s 00:07:38.209 sys 0m6.258s 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.209 22:58:30 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 ************************************ 00:07:38.209 END TEST nvmf_filesystem 00:07:38.209 ************************************ 00:07:38.209 22:58:30 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:38.209 22:58:30 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:38.209 22:58:30 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:38.209 22:58:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 ************************************ 00:07:38.209 START TEST nvmf_target_discovery 00:07:38.209 ************************************ 00:07:38.209 22:58:30 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:38.467 * Looking for test storage... 
00:07:38.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:38.467 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.467 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:38.467 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.468 22:58:30 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:43.738 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:43.738 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:43.738 22:58:35 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.738 22:58:36 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:43.738 Found net devices under 0000:da:00.0: mlx_0_0 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.738 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:43.739 Found net devices under 0000:da:00.1: mlx_0_1 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:43.739 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:43.998 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:43.998 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:43.998 altname enp218s0f0np0 00:07:43.998 altname ens818f0np0 00:07:43.998 inet 192.168.100.8/24 scope global mlx_0_0 00:07:43.998 valid_lft forever preferred_lft forever 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:43.998 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:43.998 22:58:36 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:43.998 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:43.998 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:43.998 altname enp218s0f1np1 00:07:43.998 altname ens818f1np1 00:07:43.999 inet 192.168.100.9/24 scope global mlx_0_1 00:07:43.999 valid_lft forever preferred_lft forever 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:43.999 192.168.100.9' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:43.999 192.168.100.9' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:43.999 192.168.100.9' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=779138 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 779138 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 779138 ']' 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
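For reference, the get_ip_address calls traced above reduce to parsing "ip -o -4 addr show" once per RDMA netdev. A minimal standalone sketch of that lookup, using only the interface names and 192.168.100.0/24 addresses from this run (everything else is illustrative):

#!/usr/bin/env bash
# Sketch of the per-interface address lookup performed by nvmf/common.sh above:
# print the first IPv4 address configured on each Mellanox RDMA netdev.
for ifc in mlx_0_0 mlx_0_1; do
    ip_addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
    if [[ -z "$ip_addr" ]]; then
        echo "no IPv4 address on $ifc" >&2
        continue
    fi
    echo "$ifc -> $ip_addr"   # in this run: mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9
done
# The harness then keeps the first address as NVMF_FIRST_TARGET_IP and the second as
# NVMF_SECOND_TARGET_IP, selected with 'head -n 1' and 'tail -n +2 | head -n 1' exactly as traced.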
00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:43.999 22:58:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.999 [2024-06-07 22:58:36.245923] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:07:43.999 [2024-06-07 22:58:36.245972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.999 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.258 [2024-06-07 22:58:36.306022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.258 [2024-06-07 22:58:36.386831] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.258 [2024-06-07 22:58:36.386865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.258 [2024-06-07 22:58:36.386872] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.258 [2024-06-07 22:58:36.386878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.258 [2024-06-07 22:58:36.386883] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.258 [2024-06-07 22:58:36.386924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.258 [2024-06-07 22:58:36.387024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.258 [2024-06-07 22:58:36.387092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.258 [2024-06-07 22:58:36.387093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.824 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 [2024-06-07 22:58:37.113096] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bd79d0/0x1bdbec0) succeed. 00:07:45.083 [2024-06-07 22:58:37.122433] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bd9010/0x1c1d550) succeed. 
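At this point the target (pid 779138) is running and the RDMA transport has been created. The discovery.sh steps traced next issue a fixed JSON-RPC sequence; the sketch below condenses it, assuming the standard scripts/rpc.py client against the default /var/tmp/spdk.sock socket stands in for the harness's rpc_cmd wrapper. Method names and arguments are copied from the trace; the hostnqn/hostid flags passed to nvme discover in the actual run are omitted for brevity.

rpc=scripts/rpc.py   # assumption: plain rpc.py is equivalent to the rpc_cmd wrapper used here

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Expose four null bdevs behind four subsystems listening on 192.168.100.8:4420.
for i in $(seq 1 4); do
    $rpc bdev_null_create "Null$i" 102400 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done

# Discovery service listener plus one referral, then verify from both sides.
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
nvme discover -t rdma -a 192.168.100.8 -s 4420   # expects 6 discovery log records, as shown below
$rpc nvmf_get_subsystems                         # same view over JSON-RPC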
00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 Null1 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 [2024-06-07 22:58:37.283299] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 Null2 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.083 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:45.084 22:58:37 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 Null3 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.084 Null4 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.084 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.343 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:07:45.343 00:07:45.343 Discovery Log Number of Records 6, Generation counter 6 00:07:45.343 =====Discovery Log Entry 0====== 00:07:45.343 trtype: rdma 00:07:45.343 adrfam: ipv4 00:07:45.343 subtype: current discovery subsystem 00:07:45.343 treq: not required 00:07:45.343 portid: 0 00:07:45.343 trsvcid: 4420 00:07:45.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: explicit discovery connections, duplicate discovery information 00:07:45.344 rdma_prtype: not specified 00:07:45.344 rdma_qptype: connected 00:07:45.344 rdma_cms: rdma-cm 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 =====Discovery Log Entry 1====== 00:07:45.344 trtype: rdma 00:07:45.344 adrfam: ipv4 00:07:45.344 subtype: nvme subsystem 00:07:45.344 treq: not required 00:07:45.344 portid: 0 00:07:45.344 trsvcid: 4420 00:07:45.344 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: none 00:07:45.344 rdma_prtype: not specified 00:07:45.344 rdma_qptype: connected 00:07:45.344 rdma_cms: rdma-cm 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 =====Discovery Log Entry 2====== 00:07:45.344 
trtype: rdma 00:07:45.344 adrfam: ipv4 00:07:45.344 subtype: nvme subsystem 00:07:45.344 treq: not required 00:07:45.344 portid: 0 00:07:45.344 trsvcid: 4420 00:07:45.344 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: none 00:07:45.344 rdma_prtype: not specified 00:07:45.344 rdma_qptype: connected 00:07:45.344 rdma_cms: rdma-cm 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 =====Discovery Log Entry 3====== 00:07:45.344 trtype: rdma 00:07:45.344 adrfam: ipv4 00:07:45.344 subtype: nvme subsystem 00:07:45.344 treq: not required 00:07:45.344 portid: 0 00:07:45.344 trsvcid: 4420 00:07:45.344 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: none 00:07:45.344 rdma_prtype: not specified 00:07:45.344 rdma_qptype: connected 00:07:45.344 rdma_cms: rdma-cm 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 =====Discovery Log Entry 4====== 00:07:45.344 trtype: rdma 00:07:45.344 adrfam: ipv4 00:07:45.344 subtype: nvme subsystem 00:07:45.344 treq: not required 00:07:45.344 portid: 0 00:07:45.344 trsvcid: 4420 00:07:45.344 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: none 00:07:45.344 rdma_prtype: not specified 00:07:45.344 rdma_qptype: connected 00:07:45.344 rdma_cms: rdma-cm 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 =====Discovery Log Entry 5====== 00:07:45.344 trtype: rdma 00:07:45.344 adrfam: ipv4 00:07:45.344 subtype: discovery subsystem referral 00:07:45.344 treq: not required 00:07:45.344 portid: 0 00:07:45.344 trsvcid: 4430 00:07:45.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.344 traddr: 192.168.100.8 00:07:45.344 eflags: none 00:07:45.344 rdma_prtype: unrecognized 00:07:45.344 rdma_qptype: unrecognized 00:07:45.344 rdma_cms: unrecognized 00:07:45.344 rdma_pkey: 0x0000 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:45.344 Perform nvmf subsystem discovery via RPC 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 [ 00:07:45.344 { 00:07:45.344 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:45.344 "subtype": "Discovery", 00:07:45.344 "listen_addresses": [ 00:07:45.344 { 00:07:45.344 "trtype": "RDMA", 00:07:45.344 "adrfam": "IPv4", 00:07:45.344 "traddr": "192.168.100.8", 00:07:45.344 "trsvcid": "4420" 00:07:45.344 } 00:07:45.344 ], 00:07:45.344 "allow_any_host": true, 00:07:45.344 "hosts": [] 00:07:45.344 }, 00:07:45.344 { 00:07:45.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.344 "subtype": "NVMe", 00:07:45.344 "listen_addresses": [ 00:07:45.344 { 00:07:45.344 "trtype": "RDMA", 00:07:45.344 "adrfam": "IPv4", 00:07:45.344 "traddr": "192.168.100.8", 00:07:45.344 "trsvcid": "4420" 00:07:45.344 } 00:07:45.344 ], 00:07:45.344 "allow_any_host": true, 00:07:45.344 "hosts": [], 00:07:45.344 "serial_number": "SPDK00000000000001", 00:07:45.344 "model_number": "SPDK bdev Controller", 00:07:45.344 "max_namespaces": 32, 00:07:45.344 "min_cntlid": 1, 00:07:45.344 "max_cntlid": 65519, 00:07:45.344 "namespaces": [ 00:07:45.344 { 00:07:45.344 "nsid": 1, 00:07:45.344 "bdev_name": "Null1", 00:07:45.344 "name": "Null1", 00:07:45.344 "nguid": "EB4D3473A6F740C7AEBD1BBBAA8BE8D0", 00:07:45.344 "uuid": 
"eb4d3473-a6f7-40c7-aebd-1bbbaa8be8d0" 00:07:45.344 } 00:07:45.344 ] 00:07:45.344 }, 00:07:45.344 { 00:07:45.344 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:45.344 "subtype": "NVMe", 00:07:45.344 "listen_addresses": [ 00:07:45.344 { 00:07:45.344 "trtype": "RDMA", 00:07:45.344 "adrfam": "IPv4", 00:07:45.344 "traddr": "192.168.100.8", 00:07:45.344 "trsvcid": "4420" 00:07:45.344 } 00:07:45.344 ], 00:07:45.344 "allow_any_host": true, 00:07:45.344 "hosts": [], 00:07:45.344 "serial_number": "SPDK00000000000002", 00:07:45.344 "model_number": "SPDK bdev Controller", 00:07:45.344 "max_namespaces": 32, 00:07:45.344 "min_cntlid": 1, 00:07:45.344 "max_cntlid": 65519, 00:07:45.344 "namespaces": [ 00:07:45.344 { 00:07:45.344 "nsid": 1, 00:07:45.344 "bdev_name": "Null2", 00:07:45.344 "name": "Null2", 00:07:45.344 "nguid": "17A885C6CE1248E1A9CE4209272510DE", 00:07:45.344 "uuid": "17a885c6-ce12-48e1-a9ce-4209272510de" 00:07:45.344 } 00:07:45.344 ] 00:07:45.344 }, 00:07:45.344 { 00:07:45.344 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:45.344 "subtype": "NVMe", 00:07:45.344 "listen_addresses": [ 00:07:45.344 { 00:07:45.344 "trtype": "RDMA", 00:07:45.344 "adrfam": "IPv4", 00:07:45.344 "traddr": "192.168.100.8", 00:07:45.344 "trsvcid": "4420" 00:07:45.344 } 00:07:45.344 ], 00:07:45.344 "allow_any_host": true, 00:07:45.344 "hosts": [], 00:07:45.344 "serial_number": "SPDK00000000000003", 00:07:45.344 "model_number": "SPDK bdev Controller", 00:07:45.344 "max_namespaces": 32, 00:07:45.344 "min_cntlid": 1, 00:07:45.344 "max_cntlid": 65519, 00:07:45.344 "namespaces": [ 00:07:45.344 { 00:07:45.344 "nsid": 1, 00:07:45.344 "bdev_name": "Null3", 00:07:45.344 "name": "Null3", 00:07:45.344 "nguid": "800A8E46370C4E3191028EF081B410B1", 00:07:45.344 "uuid": "800a8e46-370c-4e31-9102-8ef081b410b1" 00:07:45.344 } 00:07:45.344 ] 00:07:45.344 }, 00:07:45.344 { 00:07:45.344 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:45.344 "subtype": "NVMe", 00:07:45.344 "listen_addresses": [ 00:07:45.344 { 00:07:45.344 "trtype": "RDMA", 00:07:45.344 "adrfam": "IPv4", 00:07:45.344 "traddr": "192.168.100.8", 00:07:45.344 "trsvcid": "4420" 00:07:45.344 } 00:07:45.344 ], 00:07:45.344 "allow_any_host": true, 00:07:45.344 "hosts": [], 00:07:45.344 "serial_number": "SPDK00000000000004", 00:07:45.344 "model_number": "SPDK bdev Controller", 00:07:45.344 "max_namespaces": 32, 00:07:45.344 "min_cntlid": 1, 00:07:45.344 "max_cntlid": 65519, 00:07:45.344 "namespaces": [ 00:07:45.344 { 00:07:45.344 "nsid": 1, 00:07:45.344 "bdev_name": "Null4", 00:07:45.344 "name": "Null4", 00:07:45.344 "nguid": "9575FECD28BB456F9511DE470079C34C", 00:07:45.344 "uuid": "9575fecd-28bb-456f-9511-de470079c34c" 00:07:45.344 } 00:07:45.344 ] 00:07:45.344 } 00:07:45.344 ] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.344 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.345 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:45.604 rmmod nvme_rdma 00:07:45.604 rmmod nvme_fabrics 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 779138 ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 779138 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 779138 ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 779138 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 779138 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 779138' 00:07:45.604 killing process with pid 779138 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 779138 00:07:45.604 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@973 
-- # wait 779138 00:07:45.864 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.864 22:58:37 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:45.864 00:07:45.864 real 0m7.555s 00:07:45.864 user 0m7.967s 00:07:45.864 sys 0m4.601s 00:07:45.864 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.864 22:58:37 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.864 ************************************ 00:07:45.864 END TEST nvmf_target_discovery 00:07:45.864 ************************************ 00:07:45.864 22:58:38 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:45.864 22:58:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:45.864 22:58:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.864 22:58:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:45.864 ************************************ 00:07:45.864 START TEST nvmf_referrals 00:07:45.864 ************************************ 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:45.864 * Looking for test storage... 00:07:45.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.864 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.865 22:58:38 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.433 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:52.434 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:52.434 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.434 
22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:52.434 Found net devices under 0000:da:00.0: mlx_0_0 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:52.434 Found net devices under 0000:da:00.1: mlx_0_1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:52.434 22:58:43 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:52.434 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.434 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:52.434 altname enp218s0f0np0 00:07:52.434 altname ens818f0np0 00:07:52.434 inet 192.168.100.8/24 scope global mlx_0_0 00:07:52.434 valid_lft forever preferred_lft forever 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:52.434 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:52.434 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.434 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:52.434 altname enp218s0f1np1 00:07:52.434 altname ens818f1np1 00:07:52.434 inet 192.168.100.9/24 scope global mlx_0_1 00:07:52.435 valid_lft forever preferred_lft forever 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals 
-- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:52.435 192.168.100.9' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:52.435 192.168.100.9' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:52.435 192.168.100.9' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:52.435 22:58:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=782968 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 782968 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 782968 ']' 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:52.435 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.435 [2024-06-07 22:58:44.061862] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
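The referrals test now brings up its own target instance the same way the discovery test did. A minimal sketch of the start-and-wait step, with the binary path and flags copied from the trace; polling rpc_get_methods is an assumption standing in for the harness's waitforlisten helper:

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
"$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &   # instance 0, all tracepoint groups, core mask 0xF
nvmfpid=$!

sock=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
for _ in $(seq 1 100); do
    # assumed readiness probe; the real waitforlisten helper may check differently
    scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
kill -0 "$nvmfpid"   # target still alive once the socket answers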
00:07:52.435 [2024-06-07 22:58:44.061912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.435 [2024-06-07 22:58:44.127538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.435 [2024-06-07 22:58:44.203304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.435 [2024-06-07 22:58:44.203345] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.435 [2024-06-07 22:58:44.203352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.435 [2024-06-07 22:58:44.203361] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.435 [2024-06-07 22:58:44.203366] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.435 [2024-06-07 22:58:44.203660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.435 [2024-06-07 22:58:44.203741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.435 [2024-06-07 22:58:44.203832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.435 [2024-06-07 22:58:44.203843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.759 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:52.759 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:07:52.759 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.759 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:52.759 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.760 22:58:44 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.760 22:58:44 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:52.760 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.760 22:58:44 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.760 [2024-06-07 22:58:44.938307] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eae9d0/0x1eb2ec0) succeed. 00:07:52.760 [2024-06-07 22:58:44.947437] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eb0010/0x1ef4550) succeed. 
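With the RDMA transport created, the rest of the referrals setup is driven through rpc_cmd, which in this harness wraps scripts/rpc.py against the /var/tmp/spdk.sock socket the target was started on (an assumption about the helper, not something spelled out in the trace itself). Stripped of the xtrace noise, the listener and referral RPCs traced below amount to roughly the following sketch:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  # the RDMA transport (-t rdma --num-shared-buffers 1024 -u 8192) was created just above
  $rpc -s $sock nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  for ref in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc -s $sock nvmf_discovery_add_referral -t rdma -a "$ref" -s 4430   # one referral per address
  done
  $rpc -s $sock nvmf_discovery_get_referrals | jq length                    # the test expects 3 here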
00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 [2024-06-07 22:58:45.071439] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.019 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.278 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.537 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.796 
22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:53.796 22:58:45 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.055 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.056 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:54.315 rmmod nvme_rdma 00:07:54.315 rmmod nvme_fabrics 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 782968 ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 782968 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 782968 ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 782968 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 782968 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 782968' 00:07:54.315 killing process with pid 782968 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 782968 00:07:54.315 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 782968 00:07:54.574 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.574 22:58:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:54.574 
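All of the nvme-side assertions in this test reduce to one discovery pipeline; the flags and jq filter below are taken from the trace (the hostnqn/hostid pair is the host's own identity from nvme gen-hostnqn in common.sh, not an address the test configures):

  nvme discover \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562 \
      -t rdma -a 192.168.100.8 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # Swapping the filter for select(.subtype == "nvme subsystem") or
  # select(.subtype == "discovery subsystem referral") and reading .subnqn instead of .traddr
  # yields the subsystem NQN behind each referral, which is what the @67/@68 and @75/@76 checks
  # compare against nqn.2016-06.io.spdk:cnode1 and nqn.2014-08.org.nvmexpress.discovery.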
00:07:54.574 real 0m8.679s 00:07:54.574 user 0m12.380s 00:07:54.574 sys 0m5.152s 00:07:54.574 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:54.574 22:58:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.574 ************************************ 00:07:54.574 END TEST nvmf_referrals 00:07:54.574 ************************************ 00:07:54.574 22:58:46 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:54.574 22:58:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:54.574 22:58:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:54.574 22:58:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:54.574 ************************************ 00:07:54.574 START TEST nvmf_connect_disconnect 00:07:54.574 ************************************ 00:07:54.574 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:54.832 * Looking for test storage... 00:07:54.832 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:54.832 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.832 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:54.832 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.832 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.833 22:58:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:01.399 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:01.399 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.399 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:01.399 Found net devices under 0000:da:00.0: mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:01.400 Found net devices under 0000:da:00.1: mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:01.400 22:58:52 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:01.400 222: 
mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:01.400 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:01.400 altname enp218s0f0np0 00:08:01.400 altname ens818f0np0 00:08:01.400 inet 192.168.100.8/24 scope global mlx_0_0 00:08:01.400 valid_lft forever preferred_lft forever 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:01.400 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:01.400 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:01.400 altname enp218s0f1np1 00:08:01.400 altname ens818f1np1 00:08:01.400 inet 192.168.100.9/24 scope global mlx_0_1 00:08:01.400 valid_lft forever preferred_lft forever 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 
22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:01.400 192.168.100.9' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:01.400 192.168.100.9' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:01.400 192.168.100.9' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=786892 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 786892 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 786892 ']' 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.400 22:58:52 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 [2024-06-07 22:58:52.790470] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:01.400 [2024-06-07 22:58:52.790517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.400 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.400 [2024-06-07 22:58:52.853925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.400 [2024-06-07 22:58:52.932548] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.400 [2024-06-07 22:58:52.932583] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.400 [2024-06-07 22:58:52.932589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.400 [2024-06-07 22:58:52.932595] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.400 [2024-06-07 22:58:52.932599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
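The RDMA address discovery traced just above (allocate_nic_ips / get_available_rdma_ips) runs the same three-step pipeline for each mlx interface. A compact sketch of get_ip_address, using the exact commands shown in the trace:

  get_ip_address() {
      local interface=$1
      # "222: mlx_0_0    inet 192.168.100.8/24 ..." -> field 4 -> strip the /24 prefix length
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8 (becomes NVMF_FIRST_TARGET_IP via RDMA_IP_LIST)
  get_ip_address mlx_0_1    # -> 192.168.100.9 (becomes NVMF_SECOND_TARGET_IP)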
00:08:01.400 [2024-06-07 22:58:52.932660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.400 [2024-06-07 22:58:52.932766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.400 [2024-06-07 22:58:52.932855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.400 [2024-06-07 22:58:52.932857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.400 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 [2024-06-07 22:58:53.623037] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:01.400 [2024-06-07 22:58:53.643238] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f619d0/0x1f65ec0) succeed. 00:08:01.400 [2024-06-07 22:58:53.652347] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f63010/0x1fa7550) succeed. 
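With the transport in place (note the extra -c 0 here, i.e. zero in-capsule data requested, which is why the target bumps it to the 256-byte minimum in the warning traced below), connect_disconnect.sh builds a single namespace to connect against. The rpc_cmd calls that follow are, in direct rpc.py form (same wrapper assumption as before), roughly:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  $rpc -s $sock bdev_malloc_create 64 512            # 64 MB malloc bdev, 512-byte blocks -> Malloc0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420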
00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 [2024-06-07 22:58:53.793303] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:01.659 22:58:53 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:05.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:21.702 rmmod nvme_rdma 00:08:21.702 rmmod nvme_fabrics 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 786892 ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 786892 ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 786892' 00:08:21.702 killing process with pid 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 786892 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:21.702 00:08:21.702 real 0m27.065s 00:08:21.702 user 1m24.821s 00:08:21.702 sys 0m5.283s 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:21.702 22:59:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.702 ************************************ 00:08:21.702 END TEST nvmf_connect_disconnect 00:08:21.702 ************************************ 00:08:21.702 22:59:13 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:21.702 22:59:13 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:21.702 22:59:13 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:21.702 22:59:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:21.702 ************************************ 00:08:21.702 START TEST nvmf_multitarget 00:08:21.702 ************************************ 00:08:21.702 22:59:13 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:21.961 * Looking for test storage... 
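The nvmf_multitarget test that starts here exercises creating and deleting additional NVMe-oF targets at runtime through test/nvmf/target/multitarget_rpc.py. Once its target process is up (further down in this log), the calls it makes reduce to the short sequence sketched below; this is a minimal sketch that reuses the exact subcommands, target names and jq length check visible later in this log, not the test script itself.

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc_py nvmf_get_targets | jq length            # 1: only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length            # 3: default target plus the two just created
  $rpc_py nvmf_delete_target -n nvmf_tgt_1        # prints "true" on success
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length            # back to 1

In the log below, the create calls echo the new target names ("nvmf_tgt_1", "nvmf_tgt_2") and the deletes return true; the test's '[' N '!=' N ']' checks assert on the jq counts at each step.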
00:08:21.961 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.961 22:59:14 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.962 22:59:14 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.557 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:28.558 22:59:19 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:28.558 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:28.558 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:28.558 Found net devices under 0000:da:00.0: mlx_0_0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:28.558 Found net devices under 0000:da:00.1: mlx_0_1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:28.558 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.558 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:28.558 altname enp218s0f0np0 00:08:28.558 altname ens818f0np0 00:08:28.558 inet 192.168.100.8/24 scope global mlx_0_0 00:08:28.558 valid_lft forever preferred_lft forever 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:28.558 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:28.558 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:28.558 altname enp218s0f1np1 00:08:28.558 altname ens818f1np1 00:08:28.558 inet 192.168.100.9/24 scope global mlx_0_1 00:08:28.558 valid_lft forever preferred_lft forever 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:28.558 22:59:19 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:28.558 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:28.559 192.168.100.9' 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:28.559 192.168.100.9' 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:28.559 192.168.100.9' 00:08:28.559 22:59:19 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:08:28.559 22:59:19 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=794021 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 794021 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 794021 ']' 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:28.559 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:28.559 [2024-06-07 22:59:20.069554] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:28.559 [2024-06-07 22:59:20.069600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.559 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.559 [2024-06-07 22:59:20.130235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.559 [2024-06-07 22:59:20.202285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.559 [2024-06-07 22:59:20.202326] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.559 [2024-06-07 22:59:20.202332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.559 [2024-06-07 22:59:20.202338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:28.559 [2024-06-07 22:59:20.202342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.559 [2024-06-07 22:59:20.202412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.559 [2024-06-07 22:59:20.202506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.559 [2024-06-07 22:59:20.202598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.559 [2024-06-07 22:59:20.202599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:28.817 22:59:20 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:28.817 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:28.817 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:29.076 "nvmf_tgt_1" 00:08:29.076 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:29.076 "nvmf_tgt_2" 00:08:29.076 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:29.076 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:29.076 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:29.076 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:29.334 true 00:08:29.334 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:29.334 true 00:08:29.334 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:29.334 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:29.594 rmmod nvme_rdma 00:08:29.594 rmmod nvme_fabrics 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 794021 ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 794021 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 794021 ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 794021 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 794021 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 794021' 00:08:29.594 killing process with pid 794021 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 794021 00:08:29.594 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 794021 00:08:29.853 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.853 22:59:21 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:29.853 00:08:29.853 real 0m7.978s 00:08:29.853 user 0m9.249s 00:08:29.853 sys 0m4.921s 00:08:29.853 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.853 22:59:21 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.853 ************************************ 00:08:29.853 END TEST nvmf_multitarget 00:08:29.853 ************************************ 00:08:29.853 22:59:21 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:29.853 22:59:21 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:29.853 22:59:21 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:29.853 22:59:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:29.853 ************************************ 00:08:29.853 START TEST nvmf_rpc 00:08:29.853 ************************************ 00:08:29.853 22:59:21 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:29.853 * Looking for test storage... 00:08:29.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.853 22:59:22 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.853 22:59:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:36.419 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:36.419 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:36.419 Found net devices under 0000:da:00.0: mlx_0_0 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:36.419 Found net devices under 0000:da:00.1: mlx_0_1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.419 22:59:27 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:36.419 
22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:36.419 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:36.419 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:36.419 altname enp218s0f0np0 00:08:36.419 altname ens818f0np0 00:08:36.419 inet 192.168.100.8/24 scope global mlx_0_0 00:08:36.419 valid_lft forever preferred_lft forever 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:36.419 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:36.420 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:36.420 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:36.420 altname enp218s0f1np1 00:08:36.420 altname ens818f1np1 00:08:36.420 inet 192.168.100.9/24 scope global mlx_0_1 00:08:36.420 valid_lft forever preferred_lft forever 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:36.420 192.168.100.9' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:36.420 192.168.100.9' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:36.420 192.168.100.9' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=797775 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 797775 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 797775 ']' 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:36.420 22:59:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 [2024-06-07 22:59:27.650511] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:08:36.420 [2024-06-07 22:59:27.650554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.420 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.420 [2024-06-07 22:59:27.711753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.420 [2024-06-07 22:59:27.791241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.420 [2024-06-07 22:59:27.791279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.420 [2024-06-07 22:59:27.791286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.420 [2024-06-07 22:59:27.791292] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.420 [2024-06-07 22:59:27.791297] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
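At this point the harness has launched the target binary and is polling until it answers on its RPC socket. As a rough illustration only, not the literal nvmf/common.sh or autotest_common.sh code, the bring-up traced above corresponds to something like the sketch below; the binary path and flags are taken from the log, while the use of scripts/rpc.py and the simplified polling loop are assumptions.

  # Sketch of nvmfappstart / waitforlisten as traced above (assumptions noted in the lead-in).
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target answers (or exits early).
  # Any cheap RPC would do; rpc_get_methods is used here as an example.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"

Once the socket is up, the RDMA transport is created over it, which is what the nvmf_create_transport -t rdma call a few entries below does.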
00:08:36.420 [2024-06-07 22:59:27.791348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.420 [2024-06-07 22:59:27.791366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.420 [2024-06-07 22:59:27.791457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.420 [2024-06-07 22:59:27.791457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:36.420 "tick_rate": 2100000000, 00:08:36.420 "poll_groups": [ 00:08:36.420 { 00:08:36.420 "name": "nvmf_tgt_poll_group_000", 00:08:36.420 "admin_qpairs": 0, 00:08:36.420 "io_qpairs": 0, 00:08:36.420 "current_admin_qpairs": 0, 00:08:36.420 "current_io_qpairs": 0, 00:08:36.420 "pending_bdev_io": 0, 00:08:36.420 "completed_nvme_io": 0, 00:08:36.420 "transports": [] 00:08:36.420 }, 00:08:36.420 { 00:08:36.420 "name": "nvmf_tgt_poll_group_001", 00:08:36.420 "admin_qpairs": 0, 00:08:36.420 "io_qpairs": 0, 00:08:36.420 "current_admin_qpairs": 0, 00:08:36.420 "current_io_qpairs": 0, 00:08:36.420 "pending_bdev_io": 0, 00:08:36.420 "completed_nvme_io": 0, 00:08:36.420 "transports": [] 00:08:36.420 }, 00:08:36.420 { 00:08:36.420 "name": "nvmf_tgt_poll_group_002", 00:08:36.420 "admin_qpairs": 0, 00:08:36.420 "io_qpairs": 0, 00:08:36.420 "current_admin_qpairs": 0, 00:08:36.420 "current_io_qpairs": 0, 00:08:36.420 "pending_bdev_io": 0, 00:08:36.420 "completed_nvme_io": 0, 00:08:36.420 "transports": [] 00:08:36.420 }, 00:08:36.420 { 00:08:36.420 "name": "nvmf_tgt_poll_group_003", 00:08:36.420 "admin_qpairs": 0, 00:08:36.420 "io_qpairs": 0, 00:08:36.420 "current_admin_qpairs": 0, 00:08:36.420 "current_io_qpairs": 0, 00:08:36.420 "pending_bdev_io": 0, 00:08:36.420 "completed_nvme_io": 0, 00:08:36.420 "transports": [] 00:08:36.420 } 00:08:36.420 ] 00:08:36.420 }' 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.420 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.420 [2024-06-07 22:59:28.620624] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19769e0/0x197aed0) succeed. 00:08:36.420 [2024-06-07 22:59:28.629747] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1978020/0x19bc560) succeed. 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:36.680 "tick_rate": 2100000000, 00:08:36.680 "poll_groups": [ 00:08:36.680 { 00:08:36.680 "name": "nvmf_tgt_poll_group_000", 00:08:36.680 "admin_qpairs": 0, 00:08:36.680 "io_qpairs": 0, 00:08:36.680 "current_admin_qpairs": 0, 00:08:36.680 "current_io_qpairs": 0, 00:08:36.680 "pending_bdev_io": 0, 00:08:36.680 "completed_nvme_io": 0, 00:08:36.680 "transports": [ 00:08:36.680 { 00:08:36.680 "trtype": "RDMA", 00:08:36.680 "pending_data_buffer": 0, 00:08:36.680 "devices": [ 00:08:36.680 { 00:08:36.680 "name": "mlx5_0", 00:08:36.680 "polls": 15318, 00:08:36.680 "idle_polls": 15318, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "mlx5_1", 00:08:36.680 "polls": 15318, 00:08:36.680 "idle_polls": 15318, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "nvmf_tgt_poll_group_001", 00:08:36.680 "admin_qpairs": 0, 00:08:36.680 "io_qpairs": 0, 00:08:36.680 "current_admin_qpairs": 0, 00:08:36.680 "current_io_qpairs": 0, 00:08:36.680 "pending_bdev_io": 0, 00:08:36.680 "completed_nvme_io": 0, 00:08:36.680 "transports": [ 00:08:36.680 { 00:08:36.680 "trtype": "RDMA", 00:08:36.680 "pending_data_buffer": 0, 00:08:36.680 "devices": [ 00:08:36.680 { 00:08:36.680 "name": "mlx5_0", 00:08:36.680 "polls": 9970, 00:08:36.680 "idle_polls": 9970, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 }, 00:08:36.680 
{ 00:08:36.680 "name": "mlx5_1", 00:08:36.680 "polls": 9970, 00:08:36.680 "idle_polls": 9970, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "nvmf_tgt_poll_group_002", 00:08:36.680 "admin_qpairs": 0, 00:08:36.680 "io_qpairs": 0, 00:08:36.680 "current_admin_qpairs": 0, 00:08:36.680 "current_io_qpairs": 0, 00:08:36.680 "pending_bdev_io": 0, 00:08:36.680 "completed_nvme_io": 0, 00:08:36.680 "transports": [ 00:08:36.680 { 00:08:36.680 "trtype": "RDMA", 00:08:36.680 "pending_data_buffer": 0, 00:08:36.680 "devices": [ 00:08:36.680 { 00:08:36.680 "name": "mlx5_0", 00:08:36.680 "polls": 5281, 00:08:36.680 "idle_polls": 5281, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "mlx5_1", 00:08:36.680 "polls": 5281, 00:08:36.680 "idle_polls": 5281, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "nvmf_tgt_poll_group_003", 00:08:36.680 "admin_qpairs": 0, 00:08:36.680 "io_qpairs": 0, 00:08:36.680 "current_admin_qpairs": 0, 00:08:36.680 "current_io_qpairs": 0, 00:08:36.680 "pending_bdev_io": 0, 00:08:36.680 "completed_nvme_io": 0, 00:08:36.680 "transports": [ 00:08:36.680 { 00:08:36.680 "trtype": "RDMA", 00:08:36.680 "pending_data_buffer": 0, 00:08:36.680 "devices": [ 00:08:36.680 { 00:08:36.680 "name": "mlx5_0", 00:08:36.680 "polls": 881, 00:08:36.680 "idle_polls": 881, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 }, 00:08:36.680 { 00:08:36.680 "name": "mlx5_1", 00:08:36.680 "polls": 881, 00:08:36.680 "idle_polls": 881, 00:08:36.680 "completions": 0, 00:08:36.680 "requests": 0, 00:08:36.680 "request_latency": 0, 00:08:36.680 "pending_free_request": 0, 00:08:36.680 "pending_rdma_read": 0, 00:08:36.680 "pending_rdma_write": 0, 00:08:36.680 "pending_rdma_send": 0, 00:08:36.680 "total_send_wrs": 0, 00:08:36.680 "send_doorbell_updates": 0, 00:08:36.680 "total_recv_wrs": 4096, 00:08:36.680 "recv_doorbell_updates": 1 00:08:36.680 } 00:08:36.680 ] 
00:08:36.680 } 00:08:36.680 ] 00:08:36.680 } 00:08:36.680 ] 00:08:36.680 }' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:36.680 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:36.681 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 Malloc1 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:36.940 22:59:29 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 [2024-06-07 22:59:29.045634] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:36.940 [2024-06-07 22:59:29.091506] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:36.940 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:36.940 could not add new controller: failed to write to 
nvme-fabrics device 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:36.940 22:59:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:37.875 22:59:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.875 22:59:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:37.875 22:59:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.875 22:59:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:37.875 22:59:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:40.405 22:59:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:41.091 [2024-06-07 22:59:33.133250] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:41.091 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:41.091 could not add new controller: failed to write to nvme-fabrics device 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:41.091 22:59:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:42.026 22:59:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.026 22:59:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:42.026 22:59:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.026 22:59:34 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:42.026 22:59:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:43.929 22:59:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:44.865 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.124 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.125 [2024-06-07 22:59:37.176597] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
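The block just completed also exercised host access control: with allow_any_host disabled, the first nvme connect from the test host NQN was rejected with "does not allow host", and only succeeded after nvmf_subsystem_add_host whitelisted it. A condensed sketch of that sequence follows; it assumes scripts/rpc.py is the backend behind rpc_cmd and reduces the serial wait to a single polling loop.

  # Sketch of the access-control check performed earlier in this trace (assumptions in the lead-in).
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  "$RPC" nvmf_subsystem_allow_any_host -d "$SUBNQN"   # only whitelisted hosts may connect
  nvme connect -i 15 -t rdma -n "$SUBNQN" --hostnqn="$HOSTNQN" -a 192.168.100.8 -s 4420 \
      && echo "unexpected: connect should have been rejected"

  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"  # whitelist the host NQN
  nvme connect -i 15 -t rdma -n "$SUBNQN" --hostnqn="$HOSTNQN" -a 192.168.100.8 -s 4420
  # waitforserial: the namespace appears as a block device whose serial matches the
  # subsystem's -s value (SPDKISFASTANDAWESOME in this run).
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done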
00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.125 22:59:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:46.061 22:59:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.061 22:59:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:46.061 22:59:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.061 22:59:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:46.061 22:59:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:47.962 22:59:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.898 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 [2024-06-07 22:59:41.193314] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:49.156 22:59:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:50.091 22:59:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.091 22:59:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:50.091 22:59:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.091 22:59:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:50.091 22:59:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:51.992 22:59:44 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:52.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.927 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.185 [2024-06-07 22:59:45.206437] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.185 22:59:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.120 22:59:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.120 22:59:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:54.120 22:59:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.120 22:59:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:54.120 22:59:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:56.022 22:59:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 [2024-06-07 22:59:49.209824] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.957 22:59:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:58.335 22:59:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.335 22:59:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:58.335 22:59:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.335 22:59:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:58.335 22:59:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:00.236 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:00.237 22:59:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 
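Each of the five iterations traced here (the seq 1 5 loop in target/rpc.sh) repeats the same create, attach, connect, and tear-down cycle. In outline, and again as a sketch rather than the literal script (rpc.py assumed as the rpc_cmd backend, wait loops simplified):

  # One pass of the subsystem create/connect/tear-down loop traced above.
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  for i in $(seq 1 5); do
      "$RPC" nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      "$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
      "$RPC" nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
      "$RPC" nvmf_subsystem_allow_any_host "$SUBNQN"
      nvme connect -i 15 -t rdma -n "$SUBNQN" --hostnqn="$HOSTNQN" -a 192.168.100.8 -s 4420
      # wait for the namespace to surface, then tear everything back down
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
      nvme disconnect -n "$SUBNQN"
      "$RPC" nvmf_subsystem_remove_ns "$SUBNQN" 5
      "$RPC" nvmf_delete_subsystem "$SUBNQN"
  done

The per-iteration success criterion in the trace is simply that exactly one block device with serial SPDKISFASTANDAWESOME appears after connect and is gone again after disconnect, which is what waitforserial and waitforserial_disconnect check via lsblk.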
00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 [2024-06-07 22:59:53.226115] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:01.170 22:59:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:02.105 22:59:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.105 22:59:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:02.106 22:59:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.106 22:59:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:02.106 22:59:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 
00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:04.007 22:59:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.941 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 [2024-06-07 22:59:57.248637] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.199 [2024-06-07 22:59:57.296763] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.199 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 
22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 [2024-06-07 22:59:57.348972] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 [2024-06-07 22:59:57.397144] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 [2024-06-07 22:59:57.445321] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.200 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:05.459 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:05.459 "tick_rate": 2100000000, 00:09:05.459 "poll_groups": [ 00:09:05.459 { 00:09:05.459 "name": "nvmf_tgt_poll_group_000", 00:09:05.459 "admin_qpairs": 2, 00:09:05.459 "io_qpairs": 27, 00:09:05.459 "current_admin_qpairs": 0, 00:09:05.459 "current_io_qpairs": 0, 00:09:05.459 "pending_bdev_io": 0, 00:09:05.459 "completed_nvme_io": 127, 00:09:05.459 "transports": [ 00:09:05.459 { 00:09:05.459 "trtype": "RDMA", 00:09:05.459 "pending_data_buffer": 0, 00:09:05.459 "devices": [ 00:09:05.459 { 00:09:05.459 "name": "mlx5_0", 00:09:05.459 "polls": 3422834, 00:09:05.459 "idle_polls": 3422515, 00:09:05.459 "completions": 363, 00:09:05.459 "requests": 181, 00:09:05.459 "request_latency": 31514450, 00:09:05.459 "pending_free_request": 0, 00:09:05.459 "pending_rdma_read": 0, 00:09:05.459 "pending_rdma_write": 0, 00:09:05.459 "pending_rdma_send": 0, 00:09:05.459 "total_send_wrs": 307, 00:09:05.459 "send_doorbell_updates": 157, 00:09:05.459 "total_recv_wrs": 4277, 00:09:05.459 "recv_doorbell_updates": 157 00:09:05.459 }, 00:09:05.459 { 00:09:05.459 "name": "mlx5_1", 00:09:05.459 "polls": 3422834, 00:09:05.459 "idle_polls": 3422834, 00:09:05.459 "completions": 0, 00:09:05.459 "requests": 0, 00:09:05.459 "request_latency": 0, 00:09:05.459 "pending_free_request": 0, 00:09:05.459 "pending_rdma_read": 0, 00:09:05.459 "pending_rdma_write": 0, 00:09:05.459 "pending_rdma_send": 0, 00:09:05.459 "total_send_wrs": 0, 00:09:05.459 "send_doorbell_updates": 0, 00:09:05.459 "total_recv_wrs": 4096, 00:09:05.459 "recv_doorbell_updates": 1 00:09:05.459 } 
00:09:05.459 ] 00:09:05.459 } 00:09:05.459 ] 00:09:05.459 }, 00:09:05.459 { 00:09:05.459 "name": "nvmf_tgt_poll_group_001", 00:09:05.459 "admin_qpairs": 2, 00:09:05.459 "io_qpairs": 26, 00:09:05.459 "current_admin_qpairs": 0, 00:09:05.459 "current_io_qpairs": 0, 00:09:05.459 "pending_bdev_io": 0, 00:09:05.459 "completed_nvme_io": 125, 00:09:05.459 "transports": [ 00:09:05.459 { 00:09:05.459 "trtype": "RDMA", 00:09:05.459 "pending_data_buffer": 0, 00:09:05.459 "devices": [ 00:09:05.459 { 00:09:05.459 "name": "mlx5_0", 00:09:05.459 "polls": 3466485, 00:09:05.459 "idle_polls": 3466166, 00:09:05.459 "completions": 358, 00:09:05.459 "requests": 179, 00:09:05.459 "request_latency": 31278242, 00:09:05.459 "pending_free_request": 0, 00:09:05.459 "pending_rdma_read": 0, 00:09:05.459 "pending_rdma_write": 0, 00:09:05.459 "pending_rdma_send": 0, 00:09:05.459 "total_send_wrs": 304, 00:09:05.459 "send_doorbell_updates": 154, 00:09:05.459 "total_recv_wrs": 4275, 00:09:05.459 "recv_doorbell_updates": 155 00:09:05.459 }, 00:09:05.459 { 00:09:05.459 "name": "mlx5_1", 00:09:05.459 "polls": 3466485, 00:09:05.459 "idle_polls": 3466485, 00:09:05.459 "completions": 0, 00:09:05.459 "requests": 0, 00:09:05.459 "request_latency": 0, 00:09:05.459 "pending_free_request": 0, 00:09:05.459 "pending_rdma_read": 0, 00:09:05.459 "pending_rdma_write": 0, 00:09:05.459 "pending_rdma_send": 0, 00:09:05.459 "total_send_wrs": 0, 00:09:05.459 "send_doorbell_updates": 0, 00:09:05.459 "total_recv_wrs": 4096, 00:09:05.459 "recv_doorbell_updates": 1 00:09:05.459 } 00:09:05.459 ] 00:09:05.459 } 00:09:05.459 ] 00:09:05.459 }, 00:09:05.459 { 00:09:05.459 "name": "nvmf_tgt_poll_group_002", 00:09:05.459 "admin_qpairs": 1, 00:09:05.459 "io_qpairs": 26, 00:09:05.459 "current_admin_qpairs": 0, 00:09:05.459 "current_io_qpairs": 0, 00:09:05.459 "pending_bdev_io": 0, 00:09:05.459 "completed_nvme_io": 77, 00:09:05.459 "transports": [ 00:09:05.459 { 00:09:05.459 "trtype": "RDMA", 00:09:05.459 "pending_data_buffer": 0, 00:09:05.460 "devices": [ 00:09:05.460 { 00:09:05.460 "name": "mlx5_0", 00:09:05.460 "polls": 3480871, 00:09:05.460 "idle_polls": 3480678, 00:09:05.460 "completions": 211, 00:09:05.460 "requests": 105, 00:09:05.460 "request_latency": 17079404, 00:09:05.460 "pending_free_request": 0, 00:09:05.460 "pending_rdma_read": 0, 00:09:05.460 "pending_rdma_write": 0, 00:09:05.460 "pending_rdma_send": 0, 00:09:05.460 "total_send_wrs": 170, 00:09:05.460 "send_doorbell_updates": 94, 00:09:05.460 "total_recv_wrs": 4201, 00:09:05.460 "recv_doorbell_updates": 94 00:09:05.460 }, 00:09:05.460 { 00:09:05.460 "name": "mlx5_1", 00:09:05.460 "polls": 3480871, 00:09:05.460 "idle_polls": 3480871, 00:09:05.460 "completions": 0, 00:09:05.460 "requests": 0, 00:09:05.460 "request_latency": 0, 00:09:05.460 "pending_free_request": 0, 00:09:05.460 "pending_rdma_read": 0, 00:09:05.460 "pending_rdma_write": 0, 00:09:05.460 "pending_rdma_send": 0, 00:09:05.460 "total_send_wrs": 0, 00:09:05.460 "send_doorbell_updates": 0, 00:09:05.460 "total_recv_wrs": 4096, 00:09:05.460 "recv_doorbell_updates": 1 00:09:05.460 } 00:09:05.460 ] 00:09:05.460 } 00:09:05.460 ] 00:09:05.460 }, 00:09:05.460 { 00:09:05.460 "name": "nvmf_tgt_poll_group_003", 00:09:05.460 "admin_qpairs": 2, 00:09:05.460 "io_qpairs": 26, 00:09:05.460 "current_admin_qpairs": 0, 00:09:05.460 "current_io_qpairs": 0, 00:09:05.460 "pending_bdev_io": 0, 00:09:05.460 "completed_nvme_io": 126, 00:09:05.460 "transports": [ 00:09:05.460 { 00:09:05.460 "trtype": "RDMA", 00:09:05.460 "pending_data_buffer": 0, 00:09:05.460 
"devices": [ 00:09:05.460 { 00:09:05.460 "name": "mlx5_0", 00:09:05.460 "polls": 2704537, 00:09:05.460 "idle_polls": 2704223, 00:09:05.460 "completions": 358, 00:09:05.460 "requests": 179, 00:09:05.460 "request_latency": 32032748, 00:09:05.460 "pending_free_request": 0, 00:09:05.460 "pending_rdma_read": 0, 00:09:05.460 "pending_rdma_write": 0, 00:09:05.460 "pending_rdma_send": 0, 00:09:05.460 "total_send_wrs": 304, 00:09:05.460 "send_doorbell_updates": 154, 00:09:05.460 "total_recv_wrs": 4275, 00:09:05.460 "recv_doorbell_updates": 155 00:09:05.460 }, 00:09:05.460 { 00:09:05.460 "name": "mlx5_1", 00:09:05.460 "polls": 2704537, 00:09:05.460 "idle_polls": 2704537, 00:09:05.460 "completions": 0, 00:09:05.460 "requests": 0, 00:09:05.460 "request_latency": 0, 00:09:05.460 "pending_free_request": 0, 00:09:05.460 "pending_rdma_read": 0, 00:09:05.460 "pending_rdma_write": 0, 00:09:05.460 "pending_rdma_send": 0, 00:09:05.460 "total_send_wrs": 0, 00:09:05.460 "send_doorbell_updates": 0, 00:09:05.460 "total_recv_wrs": 4096, 00:09:05.460 "recv_doorbell_updates": 1 00:09:05.460 } 00:09:05.460 ] 00:09:05.460 } 00:09:05.460 ] 00:09:05.460 } 00:09:05.460 ] 00:09:05.460 }' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 111904844 > 0 )) 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:05.460 rmmod nvme_rdma 00:09:05.460 rmmod nvme_fabrics 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 797775 ']' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 797775 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 797775 ']' 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 797775 00:09:05.460 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 797775 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 797775' 00:09:05.721 killing process with pid 797775 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 797775 00:09:05.721 22:59:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 797775 00:09:05.980 22:59:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.980 22:59:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:05.980 00:09:05.980 real 0m36.093s 00:09:05.980 user 2m2.404s 00:09:05.980 sys 0m5.581s 00:09:05.980 22:59:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.980 22:59:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.980 ************************************ 00:09:05.980 END TEST nvmf_rpc 00:09:05.980 ************************************ 00:09:05.980 22:59:58 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:05.980 22:59:58 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:05.980 22:59:58 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.980 22:59:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:05.980 ************************************ 00:09:05.980 START TEST nvmf_invalid 00:09:05.980 ************************************ 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:05.980 * Looking for test storage... 
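[annotation] The nvmf_get_stats checks that closed the nvmf_rpc test above aggregate a numeric field across all poll groups with a jq-plus-awk sum (the jsum helper in target/rpc.sh@19-20). A minimal sketch of that pattern, with stats_json standing in for the captured RPC output (the real helper's input plumbing may differ):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    stats_json=$($rpc nvmf_get_stats)

    # sum one numeric field across the stats document, e.g. '.poll_groups[].io_qpairs'
    # or '.poll_groups[].transports[].devices[].request_latency'
    jsum_sketch() {
        local filter=$1
        echo "$stats_json" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    jsum_sketch '.poll_groups[].io_qpairs'   # evaluated to 105 in the run above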
00:09:05.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.980 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.981 22:59:58 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.981 22:59:58 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.550 
23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:12.550 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.550 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:12.551 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:12.551 Found net devices under 0000:da:00.0: mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:12.551 Found net devices under 0000:da:00.1: mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:12.551 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.551 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:12.551 altname enp218s0f0np0 00:09:12.551 altname ens818f0np0 00:09:12.551 inet 192.168.100.8/24 scope global mlx_0_0 00:09:12.551 valid_lft forever preferred_lft forever 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:12.551 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.551 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:12.551 altname enp218s0f1np1 00:09:12.551 altname ens818f1np1 00:09:12.551 inet 192.168.100.9/24 scope global mlx_0_1 00:09:12.551 valid_lft forever preferred_lft forever 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:12.551 192.168.100.9' 00:09:12.551 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:12.551 192.168.100.9' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:12.552 192.168.100.9' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=806580 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 806580 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 806580 ']' 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:12.552 23:00:04 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:12.552 [2024-06-07 23:00:04.242253] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:09:12.552 [2024-06-07 23:00:04.242295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.552 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.552 [2024-06-07 23:00:04.301605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.552 [2024-06-07 23:00:04.382014] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.552 [2024-06-07 23:00:04.382048] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.552 [2024-06-07 23:00:04.382054] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.552 [2024-06-07 23:00:04.382060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.552 [2024-06-07 23:00:04.382065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
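[annotation] The 192.168.100.8 / 192.168.100.9 target addresses used throughout come from the interface walk traced just above (get_rdma_if_list / get_ip_address in nvmf/common.sh). The address lookup itself reduces to a short pipeline; get_ip_address_sketch is a hypothetical name, and mlx_0_0 / mlx_0_1 are the mlx5 netdevs found on this rig:

    get_ip_address_sketch() {   # sketch mirroring the ip/awk/cut pipeline traced above
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address_sketch mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address_sketch mlx_0_1   # -> 192.168.100.9 in this run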
00:09:12.552 [2024-06-07 23:00:04.382107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.552 [2024-06-07 23:00:04.382190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.552 [2024-06-07 23:00:04.382289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.552 [2024-06-07 23:00:04.382290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.810 23:00:05 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:12.810 23:00:05 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:12.810 23:00:05 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.811 23:00:05 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:12.811 23:00:05 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19492 00:09:13.070 [2024-06-07 23:00:05.269501] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:13.070 { 00:09:13.070 "nqn": "nqn.2016-06.io.spdk:cnode19492", 00:09:13.070 "tgt_name": "foobar", 00:09:13.070 "method": "nvmf_create_subsystem", 00:09:13.070 "req_id": 1 00:09:13.070 } 00:09:13.070 Got JSON-RPC error response 00:09:13.070 response: 00:09:13.070 { 00:09:13.070 "code": -32603, 00:09:13.070 "message": "Unable to find target foobar" 00:09:13.070 }' 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:13.070 { 00:09:13.070 "nqn": "nqn.2016-06.io.spdk:cnode19492", 00:09:13.070 "tgt_name": "foobar", 00:09:13.070 "method": "nvmf_create_subsystem", 00:09:13.070 "req_id": 1 00:09:13.070 } 00:09:13.070 Got JSON-RPC error response 00:09:13.070 response: 00:09:13.070 { 00:09:13.070 "code": -32603, 00:09:13.070 "message": "Unable to find target foobar" 00:09:13.070 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:13.070 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19841 00:09:13.329 [2024-06-07 23:00:05.458196] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19841: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:13.329 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:13.329 { 00:09:13.329 "nqn": "nqn.2016-06.io.spdk:cnode19841", 00:09:13.329 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:13.329 "method": "nvmf_create_subsystem", 00:09:13.329 "req_id": 1 00:09:13.329 } 00:09:13.329 Got JSON-RPC error response 00:09:13.329 response: 00:09:13.329 { 00:09:13.329 "code": -32602, 00:09:13.329 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:13.329 }' 00:09:13.329 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:09:13.329 { 00:09:13.329 "nqn": "nqn.2016-06.io.spdk:cnode19841", 00:09:13.329 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:13.329 "method": "nvmf_create_subsystem", 00:09:13.329 "req_id": 1 00:09:13.329 } 00:09:13.329 Got JSON-RPC error response 00:09:13.329 response: 00:09:13.329 { 00:09:13.329 "code": -32602, 00:09:13.329 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:13.329 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:13.329 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:13.329 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13670 00:09:13.586 [2024-06-07 23:00:05.638747] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13670: invalid model number 'SPDK_Controller' 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:13.586 { 00:09:13.586 "nqn": "nqn.2016-06.io.spdk:cnode13670", 00:09:13.586 "model_number": "SPDK_Controller\u001f", 00:09:13.586 "method": "nvmf_create_subsystem", 00:09:13.586 "req_id": 1 00:09:13.586 } 00:09:13.586 Got JSON-RPC error response 00:09:13.586 response: 00:09:13.586 { 00:09:13.586 "code": -32602, 00:09:13.586 "message": "Invalid MN SPDK_Controller\u001f" 00:09:13.586 }' 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:13.586 { 00:09:13.586 "nqn": "nqn.2016-06.io.spdk:cnode13670", 00:09:13.586 "model_number": "SPDK_Controller\u001f", 00:09:13.586 "method": "nvmf_create_subsystem", 00:09:13.586 "req_id": 1 00:09:13.586 } 00:09:13.586 Got JSON-RPC error response 00:09:13.586 response: 00:09:13.586 { 00:09:13.586 "code": -32602, 00:09:13.586 "message": "Invalid MN SPDK_Controller\u001f" 00:09:13.586 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.586 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 39 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:13.587 23:00:05 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x57' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:09:13.587 23:00:05 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '^'\'':OkixJUZZxrgtPRWKSbPQ?#pLI!D;n1@ iY' 00:09:14.104 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'U_KSbPQ?#pLI!D;n1@ iY' nqn.2016-06.io.spdk:cnode17923 00:09:14.361 [2024-06-07 23:00:06.421365] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17923: invalid model number 'U_KSbPQ?#pLI!D;n1@ iY' 00:09:14.361 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:14.361 { 00:09:14.361 "nqn": "nqn.2016-06.io.spdk:cnode17923", 00:09:14.361 "model_number": "U_KSbPQ?#pLI!D;n1@ iY", 00:09:14.361 "method": "nvmf_create_subsystem", 00:09:14.361 "req_id": 1 00:09:14.361 } 00:09:14.361 Got JSON-RPC error response 00:09:14.361 response: 00:09:14.361 { 00:09:14.361 "code": -32602, 00:09:14.361 "message": "Invalid MN U_KSbPQ?#pLI!D;n1@ iY" 00:09:14.361 }' 00:09:14.361 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:14.361 { 00:09:14.361 "nqn": "nqn.2016-06.io.spdk:cnode17923", 00:09:14.361 "model_number": "U_KSbPQ?#pLI!D;n1@ iY", 00:09:14.361 "method": "nvmf_create_subsystem", 00:09:14.361 "req_id": 1 00:09:14.361 } 00:09:14.361 Got JSON-RPC error response 00:09:14.361 response: 00:09:14.361 { 00:09:14.361 "code": -32602, 00:09:14.361 "message": "Invalid MN U_KSbPQ?#pLI!D;n1@ iY" 00:09:14.361 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:14.361 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:09:14.361 [2024-06-07 23:00:06.638663] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x231c2b0/0x23207a0) succeed. 
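[Editor's note] The nvmf_invalid checks traced above all follow one pattern: call an nvmf RPC with deliberately bad input, capture the JSON-RPC error, and match the error text ("Unable to find target", "Invalid SN", "Invalid MN"). A condensed stand-alone sketch of that pattern follows; it assumes rpc.py is on PATH and an nvmf_tgt is already listening on the default socket, and the cnode names are placeholders.

    # Unknown target name must be rejected with "Unable to find target".
    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1 || true)
    [[ $out == *"Unable to find target"* ]] || echo "unexpected response: $out"
    # A serial number containing a control character (0x1f) must be rejected with "Invalid SN".
    out=$(rpc.py nvmf_create_subsystem -s $'BADSERIAL\037' nqn.2016-06.io.spdk:cnode2 2>&1 || true)
    [[ $out == *"Invalid SN"* ]] || echo "unexpected response: $out"
    # A model number containing a control character must be rejected with "Invalid MN".
    out=$(rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3 2>&1 || true)
    [[ $out == *"Invalid MN"* ]] || echo "unexpected response: $out"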
00:09:14.617 [2024-06-07 23:00:06.647741] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x231d8f0/0x2361e30) succeed. 00:09:14.617 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:14.875 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:09:14.875 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:09:14.875 192.168.100.9' 00:09:14.875 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:14.875 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:09:14.875 23:00:06 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:09:14.875 [2024-06-07 23:00:07.140883] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:15.133 { 00:09:15.133 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:15.133 "listen_address": { 00:09:15.133 "trtype": "rdma", 00:09:15.133 "traddr": "192.168.100.8", 00:09:15.133 "trsvcid": "4421" 00:09:15.133 }, 00:09:15.133 "method": "nvmf_subsystem_remove_listener", 00:09:15.133 "req_id": 1 00:09:15.133 } 00:09:15.133 Got JSON-RPC error response 00:09:15.133 response: 00:09:15.133 { 00:09:15.133 "code": -32602, 00:09:15.133 "message": "Invalid parameters" 00:09:15.133 }' 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:15.133 { 00:09:15.133 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:15.133 "listen_address": { 00:09:15.133 "trtype": "rdma", 00:09:15.133 "traddr": "192.168.100.8", 00:09:15.133 "trsvcid": "4421" 00:09:15.133 }, 00:09:15.133 "method": "nvmf_subsystem_remove_listener", 00:09:15.133 "req_id": 1 00:09:15.133 } 00:09:15.133 Got JSON-RPC error response 00:09:15.133 response: 00:09:15.133 { 00:09:15.133 "code": -32602, 00:09:15.133 "message": "Invalid parameters" 00:09:15.133 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31784 -i 0 00:09:15.133 [2024-06-07 23:00:07.325485] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31784: invalid cntlid range [0-65519] 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:15.133 { 00:09:15.133 "nqn": "nqn.2016-06.io.spdk:cnode31784", 00:09:15.133 "min_cntlid": 0, 00:09:15.133 "method": "nvmf_create_subsystem", 00:09:15.133 "req_id": 1 00:09:15.133 } 00:09:15.133 Got JSON-RPC error response 00:09:15.133 response: 00:09:15.133 { 00:09:15.133 "code": -32602, 00:09:15.133 "message": "Invalid cntlid range [0-65519]" 00:09:15.133 }' 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:15.133 { 00:09:15.133 "nqn": "nqn.2016-06.io.spdk:cnode31784", 00:09:15.133 "min_cntlid": 0, 00:09:15.133 "method": "nvmf_create_subsystem", 00:09:15.133 "req_id": 1 00:09:15.133 } 00:09:15.133 Got JSON-RPC error response 00:09:15.133 response: 00:09:15.133 { 00:09:15.133 "code": -32602, 00:09:15.133 "message": "Invalid cntlid range [0-65519]" 00:09:15.133 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:15.133 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32763 -i 65520 00:09:15.391 [2024-06-07 23:00:07.514138] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32763: invalid cntlid range [65520-65519] 00:09:15.391 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:15.391 { 00:09:15.391 "nqn": "nqn.2016-06.io.spdk:cnode32763", 00:09:15.391 "min_cntlid": 65520, 00:09:15.391 "method": "nvmf_create_subsystem", 00:09:15.391 "req_id": 1 00:09:15.391 } 00:09:15.391 Got JSON-RPC error response 00:09:15.391 response: 00:09:15.391 { 00:09:15.391 "code": -32602, 00:09:15.391 "message": "Invalid cntlid range [65520-65519]" 00:09:15.391 }' 00:09:15.391 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:15.391 { 00:09:15.391 "nqn": "nqn.2016-06.io.spdk:cnode32763", 00:09:15.391 "min_cntlid": 65520, 00:09:15.391 "method": "nvmf_create_subsystem", 00:09:15.391 "req_id": 1 00:09:15.391 } 00:09:15.391 Got JSON-RPC error response 00:09:15.391 response: 00:09:15.391 { 00:09:15.391 "code": -32602, 00:09:15.391 "message": "Invalid cntlid range [65520-65519]" 00:09:15.391 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:15.391 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21441 -I 0 00:09:15.650 [2024-06-07 23:00:07.698786] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21441: invalid cntlid range [1-0] 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:15.650 { 00:09:15.650 "nqn": "nqn.2016-06.io.spdk:cnode21441", 00:09:15.650 "max_cntlid": 0, 00:09:15.650 "method": "nvmf_create_subsystem", 00:09:15.650 "req_id": 1 00:09:15.650 } 00:09:15.650 Got JSON-RPC error response 00:09:15.650 response: 00:09:15.650 { 00:09:15.650 "code": -32602, 00:09:15.650 "message": "Invalid cntlid range [1-0]" 00:09:15.650 }' 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:15.650 { 00:09:15.650 "nqn": "nqn.2016-06.io.spdk:cnode21441", 00:09:15.650 "max_cntlid": 0, 00:09:15.650 "method": "nvmf_create_subsystem", 00:09:15.650 "req_id": 1 00:09:15.650 } 00:09:15.650 Got JSON-RPC error response 00:09:15.650 response: 00:09:15.650 { 00:09:15.650 "code": -32602, 00:09:15.650 "message": "Invalid cntlid range [1-0]" 00:09:15.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24617 -I 65520 00:09:15.650 [2024-06-07 23:00:07.887455] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24617: invalid cntlid range [1-65520] 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:15.650 { 00:09:15.650 "nqn": "nqn.2016-06.io.spdk:cnode24617", 00:09:15.650 "max_cntlid": 65520, 00:09:15.650 "method": "nvmf_create_subsystem", 00:09:15.650 "req_id": 1 00:09:15.650 } 00:09:15.650 Got JSON-RPC error response 00:09:15.650 response: 00:09:15.650 { 00:09:15.650 "code": -32602, 00:09:15.650 "message": "Invalid cntlid range [1-65520]" 00:09:15.650 }' 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:09:15.650 { 00:09:15.650 "nqn": "nqn.2016-06.io.spdk:cnode24617", 00:09:15.650 "max_cntlid": 65520, 00:09:15.650 "method": "nvmf_create_subsystem", 00:09:15.650 "req_id": 1 00:09:15.650 } 00:09:15.650 Got JSON-RPC error response 00:09:15.650 response: 00:09:15.650 { 00:09:15.650 "code": -32602, 00:09:15.650 "message": "Invalid cntlid range [1-65520]" 00:09:15.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:15.650 23:00:07 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6982 -i 6 -I 5 00:09:15.909 [2024-06-07 23:00:08.088245] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6982: invalid cntlid range [6-5] 00:09:15.909 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:15.909 { 00:09:15.909 "nqn": "nqn.2016-06.io.spdk:cnode6982", 00:09:15.909 "min_cntlid": 6, 00:09:15.909 "max_cntlid": 5, 00:09:15.909 "method": "nvmf_create_subsystem", 00:09:15.909 "req_id": 1 00:09:15.909 } 00:09:15.909 Got JSON-RPC error response 00:09:15.909 response: 00:09:15.909 { 00:09:15.909 "code": -32602, 00:09:15.909 "message": "Invalid cntlid range [6-5]" 00:09:15.909 }' 00:09:15.909 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:15.909 { 00:09:15.909 "nqn": "nqn.2016-06.io.spdk:cnode6982", 00:09:15.909 "min_cntlid": 6, 00:09:15.909 "max_cntlid": 5, 00:09:15.909 "method": "nvmf_create_subsystem", 00:09:15.909 "req_id": 1 00:09:15.909 } 00:09:15.909 Got JSON-RPC error response 00:09:15.909 response: 00:09:15.909 { 00:09:15.909 "code": -32602, 00:09:15.909 "message": "Invalid cntlid range [6-5]" 00:09:15.909 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:15.909 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:16.168 { 00:09:16.168 "name": "foobar", 00:09:16.168 "method": "nvmf_delete_target", 00:09:16.168 "req_id": 1 00:09:16.168 } 00:09:16.168 Got JSON-RPC error response 00:09:16.168 response: 00:09:16.168 { 00:09:16.168 "code": -32602, 00:09:16.168 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:16.168 }' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:16.168 { 00:09:16.168 "name": "foobar", 00:09:16.168 "method": "nvmf_delete_target", 00:09:16.168 "req_id": 1 00:09:16.168 } 00:09:16.168 Got JSON-RPC error response 00:09:16.168 response: 00:09:16.168 { 00:09:16.168 "code": -32602, 00:09:16.168 "message": "The specified target doesn't exist, cannot delete it." 
00:09:16.168 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:16.168 rmmod nvme_rdma 00:09:16.168 rmmod nvme_fabrics 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 806580 ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 806580 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 806580 ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 806580 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 806580 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 806580' 00:09:16.168 killing process with pid 806580 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 806580 00:09:16.168 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 806580 00:09:16.435 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.435 23:00:08 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:16.435 00:09:16.435 real 0m10.448s 00:09:16.435 user 0m20.617s 00:09:16.435 sys 0m5.487s 00:09:16.435 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:16.435 23:00:08 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:16.435 ************************************ 00:09:16.435 END TEST nvmf_invalid 00:09:16.435 ************************************ 00:09:16.435 23:00:08 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:16.435 23:00:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:16.435 23:00:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:16.435 23:00:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:16.435 
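[Editor's note] Summary of the cntlid checks that close out nvmf_invalid above: the -i/-I arguments of nvmf_create_subsystem set min_cntlid/max_cntlid, and judging by the errors in the trace the target only accepts 1 <= min <= max <= 65519, so each of the following calls is rejected with an "Invalid cntlid range" error (cnodeX stands in for the random NQNs used in the run above):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX -i 0        # range [0-65519]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX -i 65520    # range [65520-65519]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX -I 0        # range [1-0]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX -I 65520    # range [1-65520]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnodeX -i 6 -I 5   # range [6-5]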
************************************ 00:09:16.435 START TEST nvmf_abort 00:09:16.435 ************************************ 00:09:16.435 23:00:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:16.435 * Looking for test storage... 00:09:16.723 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.724 23:00:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:23.318 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:23.318 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:23.318 Found net devices under 0000:da:00.0: mlx_0_0 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:23.318 Found net devices under 0000:da:00.1: mlx_0_1 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:09:23.318 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:23.319 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:23.319 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:23.319 altname enp218s0f0np0 00:09:23.319 altname ens818f0np0 00:09:23.319 inet 192.168.100.8/24 scope global mlx_0_0 00:09:23.319 valid_lft forever preferred_lft forever 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:23.319 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:23.319 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:23.319 altname enp218s0f1np1 00:09:23.319 altname ens818f1np1 00:09:23.319 inet 192.168.100.9/24 scope global mlx_0_1 00:09:23.319 valid_lft forever preferred_lft forever 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:23.319 192.168.100.9' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:23.319 192.168.100.9' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:23.319 192.168.100.9' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set 
+x 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=811279 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 811279 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 811279 ']' 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:23.319 23:00:14 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.319 [2024-06-07 23:00:14.749740] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:09:23.319 [2024-06-07 23:00:14.749785] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.319 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.319 [2024-06-07 23:00:14.810596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.319 [2024-06-07 23:00:14.882543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.319 [2024-06-07 23:00:14.882585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.320 [2024-06-07 23:00:14.882592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.320 [2024-06-07 23:00:14.882597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.320 [2024-06-07 23:00:14.882602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
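[Editor's note] On the -m core masks seen in these traces: nvmf_tgt ran with -m 0xF (cores 0-3) for nvmf_invalid and runs with -m 0xE (cores 1-3) here, which is why the reactor notices below list only cores 1-3. The mask is simply a hex bitmap of CPU cores; a small sketch for building one:

    # Build the 0xE mask (cores 1-3) used by the abort-test target above.
    mask=0
    for core in 1 2 3; do
        mask=$(( mask | (1 << core) ))
    done
    printf -- '-m 0x%X\n' "$mask"   # prints: -m 0xE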
00:09:23.320 [2024-06-07 23:00:14.882716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.320 [2024-06-07 23:00:14.882792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.320 [2024-06-07 23:00:14.882793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.320 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 [2024-06-07 23:00:15.617776] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b151f0/0x1b196e0) succeed. 00:09:23.577 [2024-06-07 23:00:15.626830] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b16790/0x1b5ad70) succeed. 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 Malloc0 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 Delay0 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.577 [2024-06-07 23:00:15.777474] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.577 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:23.578 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.578 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.578 23:00:15 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.578 23:00:15 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:23.578 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.836 [2024-06-07 23:00:15.865342] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:25.733 Initializing NVMe Controllers 00:09:25.733 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:25.733 controller IO queue size 128 less than required 00:09:25.733 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:25.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:25.733 Initialization complete. Launching workers. 00:09:25.733 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 52502 00:09:25.733 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 52563, failed to submit 62 00:09:25.733 success 52503, unsuccess 60, failed 0 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.733 23:00:17 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:25.733 rmmod nvme_rdma 00:09:25.733 rmmod nvme_fabrics 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
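For reference, the target bring-up that the nvmf_abort run above performs through the harness's rpc_cmd wrapper can be re-issued by hand against the same RPC socket. The following is a minimal sketch assembled from the traced commands, assuming nvmf_tgt is already running and listening on /var/tmp/spdk.sock and that the commands are issued from the SPDK source tree; the $rpc shorthand is introduced here only for readability and is not part of abort.sh.

  rpc=scripts/rpc.py   # assumption: path relative to the SPDK tree
  # Transport and backing bdevs, exactly as traced above
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Subsystem, namespace and listeners
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # Drive abort traffic against the delayed namespace (abort.sh line 30)
  build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev adds a large artificial latency (1,000,000 us on reads and writes) on top of Malloc0, which keeps I/O outstanding long enough to be aborted; that is consistent with the summary above, where nearly every submitted I/O is aborted rather than completed.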
00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 811279 ']' 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 811279 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 811279 ']' 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 811279 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 811279 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 811279' 00:09:25.991 killing process with pid 811279 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@968 -- # kill 811279 00:09:25.991 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@973 -- # wait 811279 00:09:26.249 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:26.249 00:09:26.249 real 0m9.690s 00:09:26.249 user 0m14.200s 00:09:26.249 sys 0m4.830s 00:09:26.249 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:26.249 23:00:18 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 ************************************ 00:09:26.249 END TEST nvmf_abort 00:09:26.249 ************************************ 00:09:26.249 23:00:18 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:26.249 23:00:18 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:26.249 23:00:18 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:26.249 23:00:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 ************************************ 00:09:26.249 START TEST nvmf_ns_hotplug_stress 00:09:26.249 ************************************ 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:26.249 * Looking for test storage... 
00:09:26.249 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.249 23:00:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:32.809 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:32.809 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:da:00.0: mlx_0_0' 00:09:32.809 Found net devices under 0000:da:00.0: mlx_0_0 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:32.809 Found net devices under 0000:da:00.1: mlx_0_1 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:32.809 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:32.810 23:00:24 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:32.810 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:32.810 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:32.810 altname enp218s0f0np0 00:09:32.810 altname ens818f0np0 00:09:32.810 inet 192.168.100.8/24 scope global mlx_0_0 00:09:32.810 valid_lft forever preferred_lft forever 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:32.810 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:09:32.810 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:32.810 altname enp218s0f1np1 00:09:32.810 altname ens818f1np1 00:09:32.810 inet 192.168.100.9/24 scope global mlx_0_1 00:09:32.810 valid_lft forever preferred_lft forever 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:32.810 192.168.100.9' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:32.810 192.168.100.9' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:32.810 192.168.100.9' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=815278 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 815278 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 815278 ']' 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:32.810 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:32.811 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:32.811 23:00:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.811 [2024-06-07 23:00:24.466818] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:09:32.811 [2024-06-07 23:00:24.466866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.811 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.811 [2024-06-07 23:00:24.527142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.811 [2024-06-07 23:00:24.605689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.811 [2024-06-07 23:00:24.605724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.811 [2024-06-07 23:00:24.605731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.811 [2024-06-07 23:00:24.605740] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.811 [2024-06-07 23:00:24.605745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.811 [2024-06-07 23:00:24.605840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.811 [2024-06-07 23:00:24.605925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.811 [2024-06-07 23:00:24.605926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:33.067 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:33.324 [2024-06-07 23:00:25.478262] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9531f0/0x9576e0) succeed. 00:09:33.324 [2024-06-07 23:00:25.487405] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x954790/0x998d70) succeed. 
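At this point in the trace nvmf_create_transport has returned and both mlx5 ports have been registered as IB devices. If you are reproducing the setup interactively, the transport and, once the following steps complete, the cnode1 subsystem can be inspected over the same RPC socket. The two read-only calls below are shown purely as an illustrative check under that assumption; ns_hotplug_stress.sh itself does not run them.

  # Illustrative check only, not part of the traced script
  scripts/rpc.py nvmf_get_transports     # should list the rdma transport created above (num-shared-buffers 1024)
  scripts/rpc.py nvmf_get_subsystems     # shows cnode1, its namespaces and the 192.168.100.8:4420 listener once configured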
00:09:33.581 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.581 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:33.839 [2024-06-07 23:00:25.945020] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:33.839 23:00:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:34.097 23:00:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:34.097 Malloc0 00:09:34.097 23:00:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:34.353 Delay0 00:09:34.353 23:00:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.610 23:00:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:34.610 NULL1 00:09:34.610 23:00:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:34.867 23:00:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:34.867 23:00:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=815736 00:09:34.867 23:00:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:34.867 23:00:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.867 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.239 Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 23:00:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.239 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:09:36.239 23:00:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:36.239 23:00:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:36.495 true 00:09:36.495 23:00:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:36.495 23:00:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 23:00:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.428 23:00:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:37.428 23:00:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:37.686 true 00:09:37.686 23:00:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:37.686 23:00:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 23:00:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.620 23:00:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:38.620 23:00:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:38.878 true 00:09:38.878 23:00:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:38.878 23:00:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:09:39.812 23:00:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.812 23:00:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:39.812 23:00:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:39.812 true 00:09:40.070 23:00:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:40.070 23:00:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.002 23:00:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.003 23:00:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:41.003 23:00:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:41.003 true 00:09:41.260 23:00:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:41.260 23:00:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.859 23:00:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.116 23:00:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:42.116 23:00:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:42.373 true 00:09:42.373 23:00:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:42.373 23:00:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 23:00:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.304 23:00:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:43.304 23:00:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:43.562 true 00:09:43.562 23:00:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:43.562 23:00:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 23:00:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.495 23:00:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:44.495 23:00:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:44.754 true 00:09:44.754 23:00:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:44.754 23:00:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 23:00:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.687 23:00:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:45.687 23:00:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:45.945 true 00:09:45.945 23:00:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:45.945 23:00:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 23:00:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.879 23:00:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:46.879 23:00:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:47.137 true 00:09:47.137 23:00:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:47.137 23:00:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 23:00:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.070 23:00:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:48.070 23:00:40 nvmf_rdma.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:48.328 true 00:09:48.328 23:00:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:48.328 23:00:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.261 23:00:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.261 23:00:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:49.261 23:00:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:49.518 true 00:09:49.518 23:00:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:49.518 23:00:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 23:00:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.452 23:00:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:50.452 23:00:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:50.710 true 00:09:50.710 23:00:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:50.710 23:00:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 23:00:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.653 23:00:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:51.653 23:00:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:51.911 true 00:09:51.911 23:00:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:51.911 23:00:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 23:00:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.845 23:00:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:52.845 23:00:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:53.103 true 00:09:53.103 23:00:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:53.103 23:00:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 23:00:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.037 23:00:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:54.037 23:00:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:54.295 true 00:09:54.295 23:00:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:54.295 23:00:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
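The repeating block of trace above is the main hotplug loop of test/nvmf/target/ns_hotplug_stress.sh (script lines 44-50 in the xtrace), and the same cycle keeps going below until the background I/O job exits: while the job (pid 815736) is still alive, NSID 1 is removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is added back, and the NULL1 bdev is resized to a value that grows by 1 each pass (null_size 1009, 1010, 1011, ...). The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are emitted by that I/O job, which keeps reading while its namespace is detached and re-attached underneath it. A minimal sketch of the loop, reconstructed from the trace rather than copied from the script (rpc, subsys and perf_pid are stand-in names):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    while kill -0 "$perf_pid"; do                   # line 44: loop until the I/O job is gone
        $rpc nvmf_subsystem_remove_ns "$subsys" 1   # line 45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns "$subsys" Delay0 # line 46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                # line 49: bump the size every pass
        $rpc bdev_null_resize NULL1 "$null_size"    # line 50: resize NULL1 (the RPC prints "true")
    done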
00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 23:00:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.228 23:00:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:55.228 23:00:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:55.485 true 00:09:55.485 23:00:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:55.485 23:00:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.415 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.415 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:56.415 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:56.672 true 00:09:56.672 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:56.672 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.672 23:00:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.930 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:56.930 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:57.188 true 00:09:57.188 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:57.188 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.188 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.446 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:57.446 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:57.703 true 00:09:57.703 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:57.703 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.961 23:00:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.961 23:00:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:57.961 23:00:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:58.221 true 00:09:58.221 23:00:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:58.221 23:00:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.593 23:00:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.594 23:00:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:59.594 23:00:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:59.594 true 00:09:59.594 23:00:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:09:59.594 23:00:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 23:00:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:10:00.525 23:00:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:00.782 23:00:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:00.782 true 00:10:00.783 23:00:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:00.783 23:00:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 23:00:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.969 23:00:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:01.969 23:00:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:01.969 true 00:10:01.969 23:00:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:01.969 23:00:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 23:00:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.162 23:00:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:03.162 23:00:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:03.162 true 00:10:03.162 23:00:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:03.162 23:00:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.097 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:10:04.097 23:00:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.355 23:00:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:04.355 23:00:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:04.355 true 00:10:04.355 23:00:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:04.355 23:00:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.361 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.361 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:05.361 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:05.619 true 00:10:05.619 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:05.619 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.877 23:00:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.877 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:05.877 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:06.136 true 00:10:06.136 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:06.136 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.394 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.394 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:06.394 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:06.653 
true 00:10:06.653 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:06.653 23:00:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.911 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.170 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:07.170 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:07.170 true 00:10:07.170 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736 00:10:07.170 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.170 Initializing NVMe Controllers 00:10:07.170 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.170 Controller IO queue size 128, less than required. 00:10:07.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:07.170 Controller IO queue size 128, less than required. 00:10:07.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:07.170 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:07.170 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:07.170 Initialization complete. Launching workers. 
00:10:07.170 ========================================================
00:10:07.170 Latency(us)
00:10:07.170 Device Information : IOPS MiB/s Average min max
00:10:07.170 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4989.20 2.44 22598.99 865.37 1138113.22
00:10:07.170 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33624.73 16.42 3806.61 2251.00 296254.47
00:10:07.170 ========================================================
00:10:07.170 Total : 38613.93 18.85 6234.72 865.37 1138113.22
00:10:07.170
00:10:07.429 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:07.687 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:10:07.687 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:10:07.687 true
00:10:07.687 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 815736
00:10:07.687 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (815736) - No such process
00:10:07.687 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 815736
00:10:07.687 23:00:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:07.946 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:08.205 null0
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:08.205 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:08.463 null1
00:10:08.463 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:08.463 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:08.463 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:08.720 null2
00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads
)) 00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:08.720 null3 00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.720 23:01:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:08.978 null4 00:10:08.978 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.978 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.978 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:09.236 null5 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:09.236 null6 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:09.236 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:09.495 null7 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
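At this point the single-namespace phase is over: kill -0 reports "No such process" for the I/O job, the script waits for it (line 53) and removes the remaining namespaces 1 and 2 (lines 54-55). The trace then switches to the parallel phase: script lines 58-64 create eight null bdevs (null0 through null7, each with size 100 and block size 4096 as passed to bdev_null_create) and fork one add_remove worker per bdev, collecting the worker PIDs; the launches of the remaining workers continue in the trace below. A sketch of that setup, reconstructed from the xtrace and not the script verbatim (rpc is the same stand-in as in the earlier sketch):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do        # lines 59-60: one backing bdev per worker
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do        # lines 62-64: fork the hotplug workers
        add_remove $((i + 1)) "null$i" &        # worker for NSID i+1, backed by null$i
        pids+=($!)
    done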
00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.495 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 821660 821662 821666 821669 821672 821676 821678 821680 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.496 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.755 23:01:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
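The add_remove function that each worker runs (script lines 14-18 in the xtrace) is the core of this phase: it attaches its bdev as a fixed namespace ID and detaches it again, ten times in a row. With eight workers doing this concurrently against the same subsystem, plus the wait on line 66 (wait 821660 821662 821666 821669 821672 821676 821678 821680 in the trace), the interleaved add/remove records that fill the rest of this section are the intended load. Reconstructed from the trace, again as a sketch rather than the script itself:

    add_remove() {                              # lines 14-18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The parent only moves on once wait returns, i.e. after every worker has completed its ten add/remove cycles.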
00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.014 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.015 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
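The remaining trace is the eight workers racing through those iterations until the wait above returns. As a side note, not something this run does, the current namespace layout of the subsystem can be inspected at any time with the same RPC client the test is driving, for example:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems

which prints each subsystem together with the namespaces currently attached to it.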
00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.274 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.533 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.792 
23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.792 23:01:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.050 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.308 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.567 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.825 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.826 23:01:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.826 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.084 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.084 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.085 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.343 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.343 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.343 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.343 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.344 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:12.344 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.344 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.344 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.602 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
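Editor's note: the interleaved add_ns/remove_ns calls above all come from markers @16-@18 of target/ns_hotplug_stress.sh. A plausible reconstruction of that loop is sketched below; the shuffled namespace ordering suggests one background worker per namespace, but the function name, worker structure, and exact bounds here are assumptions, not the verbatim script.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

add_remove() {
    # repeatedly attach and detach one namespace against cnode1
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                                                  # @16 in the trace
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &   # null0..null7 map to nsid 1..8, as in the trace
done
wait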
00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.603 23:01:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.862 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.121 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:13.380 rmmod nvme_rdma 00:10:13.380 rmmod nvme_fabrics 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 815278 ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 815278 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 815278 ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 815278 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 815278 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 815278' 00:10:13.380 killing process with pid 815278 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 815278 00:10:13.380 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 815278 00:10:13.639 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.639 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == 
\t\c\p ]] 00:10:13.639 00:10:13.639 real 0m47.372s 00:10:13.639 user 3m17.593s 00:10:13.639 sys 0m11.939s 00:10:13.639 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:13.639 23:01:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.639 ************************************ 00:10:13.639 END TEST nvmf_ns_hotplug_stress 00:10:13.639 ************************************ 00:10:13.639 23:01:05 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:13.639 23:01:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:13.639 23:01:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:13.639 23:01:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:13.639 ************************************ 00:10:13.639 START TEST nvmf_connect_stress 00:10:13.639 ************************************ 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:13.639 * Looking for test storage... 00:10:13.639 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.639 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:13.898 23:01:05 
nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.898 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 
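For readability, the defaults that test/nvmf/common.sh establishes before the connect_stress body runs are condensed below, using the values echoed in the xtrace above. Grouping them into one block is editorial, and the derivation of NVME_HOSTID from the generated hostnqn is an assumption that happens to match this run.

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8                       # first RDMA target IP becomes 192.168.100.8
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_TRANSPORT_OPTS=
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:803833e2-... in this run
NVME_HOSTID=${NVME_HOSTNQN##*:}            # assumed derivation; equals the uuid suffix of the hostnqn
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'                # later amended to 'nvme connect -i 15' once mlx5 rdma NICs are found
NET_TYPE=phy
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn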
00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.899 23:01:05 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.464 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:20.465 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:20.465 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:20.465 23:01:11 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:20.465 Found net devices under 0000:da:00.0: mlx_0_0 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:20.465 Found net devices under 0000:da:00.1: mlx_0_1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:20.465 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:20.465 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:20.465 altname enp218s0f0np0 00:10:20.465 altname ens818f0np0 00:10:20.465 inet 192.168.100.8/24 scope global mlx_0_0 00:10:20.465 valid_lft forever preferred_lft forever 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:20.465 23:01:11 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:20.465 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:20.465 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:20.465 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:20.465 altname enp218s0f1np1 00:10:20.466 altname ens818f1np1 00:10:20.466 inet 192.168.100.9/24 scope global mlx_0_1 00:10:20.466 valid_lft forever preferred_lft forever 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:20.466 23:01:11 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:20.466 192.168.100.9' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:20.466 192.168.100.9' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:20.466 192.168.100.9' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=825982 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 825982 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 825982 ']' 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:20.466 23:01:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.466 [2024-06-07 23:01:11.935767] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:10:20.466 [2024-06-07 23:01:11.935817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.466 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.466 [2024-06-07 23:01:11.995060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.466 [2024-06-07 23:01:12.074893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.466 [2024-06-07 23:01:12.074926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.466 [2024-06-07 23:01:12.074933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.466 [2024-06-07 23:01:12.074939] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.466 [2024-06-07 23:01:12.074944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.466 [2024-06-07 23:01:12.075000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.466 [2024-06-07 23:01:12.075027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.466 [2024-06-07 23:01:12.075029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 [2024-06-07 23:01:12.818948] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa811f0/0xa856e0) succeed. 00:10:20.725 [2024-06-07 23:01:12.827963] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa82790/0xac6d70) succeed. 
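Pulling the setup together: the rpc_cmd calls above and immediately below bring up the target for the connect stress run. A linear sketch of that sequence follows, using the binary and script paths exactly as they appear in the trace; it is a condensation of several helper functions, not the verbatim scripts, and waitforlisten is an autotest_common.sh helper whose behavior is only summarized in the comment.

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc_py="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# nvmfappstart -m 0xE: launch the target and wait for its RPC socket
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten "$nvmfpid"                     # blocks until /var/tmp/spdk.sock answers RPCs

$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc_py bdev_null_create NULL1 1000 512      # backing namespace for the stress run

# connect_stress then hammers the subsystem over RDMA for 10 seconds (-t 10)
$spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &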
00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 [2024-06-07 23:01:12.942669] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 NULL1 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=826230 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:20.725 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.726 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.726 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:20.991 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.250 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.250 23:01:13 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:21.250 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.250 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.250 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.509 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.509 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:21.509 23:01:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.509 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.509 23:01:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.767 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.767 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:21.767 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.767 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.767 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.334 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:22.334 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:22.334 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.334 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:22.334 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.593 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:22.593 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:22.593 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.593 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:22.593 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.852 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:22.852 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:22.852 23:01:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.852 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:22.852 23:01:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.110 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:23.110 23:01:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:23.110 23:01:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.110 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:23.110 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.369 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:23.369 23:01:15 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 826230 00:10:23.369 23:01:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.369 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:23.369 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.936 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:23.936 23:01:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:23.936 23:01:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.936 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:23.936 23:01:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.195 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.195 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:24.195 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.195 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.195 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.453 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.453 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:24.453 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.453 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.453 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.712 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.712 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:24.712 23:01:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.712 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.712 23:01:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.278 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.278 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:25.278 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.278 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.278 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.536 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.536 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:25.536 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.536 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.536 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.794 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.794 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 826230 00:10:25.794 23:01:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.794 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.794 23:01:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.053 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:26.053 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:26.053 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.053 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:26.053 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.321 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:26.321 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:26.321 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.321 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:26.321 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.925 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:26.925 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:26.925 23:01:18 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.925 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:26.925 23:01:18 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.183 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.183 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:27.183 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.183 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.183 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.441 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.441 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:27.441 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.441 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.441 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.699 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:27.699 23:01:19 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.699 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.699 23:01:19 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.957 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.957 23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:27.957 
23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.957 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.957 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.523 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.523 23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:28.523 23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.523 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.523 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.781 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:28.781 23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:28.781 23:01:20 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.781 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:28.781 23:01:20 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.039 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.039 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:29.039 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.039 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.039 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.298 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.298 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:29.298 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.298 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.298 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.865 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:29.865 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:29.865 23:01:21 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.865 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:29.865 23:01:21 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:30.124 23:01:22 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:30.124 23:01:22 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.124 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:30.124 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.382 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:30.382 23:01:22 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:30.382 23:01:22 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.382 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:30.382 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.641 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:30.641 23:01:22 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:30.641 23:01:22 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.641 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:30.641 23:01:22 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.900 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 826230 00:10:30.900 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (826230) - No such process 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 826230 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.900 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:30.900 rmmod nvme_rdma 00:10:31.160 rmmod nvme_fabrics 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 825982 ']' 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 825982 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 825982 ']' 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 825982 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 825982 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 
00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 825982' 00:10:31.160 killing process with pid 825982 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 825982 00:10:31.160 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 825982 00:10:31.420 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:31.420 00:10:31.420 real 0m17.670s 00:10:31.420 user 0m41.989s 00:10:31.420 sys 0m6.257s 00:10:31.420 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:31.420 23:01:23 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.420 ************************************ 00:10:31.420 END TEST nvmf_connect_stress 00:10:31.420 ************************************ 00:10:31.420 23:01:23 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:31.420 23:01:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:31.420 23:01:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:31.420 23:01:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:31.420 ************************************ 00:10:31.420 START TEST nvmf_fused_ordering 00:10:31.420 ************************************ 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:31.420 * Looking for test storage... 
00:10:31.420 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.420 23:01:23 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.421 23:01:23 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:37.991 23:01:29 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:37.991 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:37.991 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:37.992 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:37.992 Found net devices under 0000:da:00.0: mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:37.992 Found net devices under 0000:da:00.1: mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:37.992 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.992 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:37.992 altname enp218s0f0np0 00:10:37.992 altname ens818f0np0 00:10:37.992 inet 192.168.100.8/24 scope global mlx_0_0 00:10:37.992 valid_lft forever preferred_lft forever 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:37.992 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.992 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:37.992 altname enp218s0f1np1 00:10:37.992 altname ens818f1np1 00:10:37.992 inet 192.168.100.9/24 scope global mlx_0_1 00:10:37.992 valid_lft forever preferred_lft forever 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.992 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:37.993 192.168.100.9' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:37.993 192.168.100.9' 
00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:37.993 192.168.100.9' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=831440 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 831440 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 831440 ']' 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:37.993 23:01:29 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.993 [2024-06-07 23:01:29.809704] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:10:37.993 [2024-06-07 23:01:29.809746] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.993 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.993 [2024-06-07 23:01:29.867715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.993 [2024-06-07 23:01:29.942910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:37.993 [2024-06-07 23:01:29.942948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.993 [2024-06-07 23:01:29.942955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.993 [2024-06-07 23:01:29.942960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.993 [2024-06-07 23:01:29.942965] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.993 [2024-06-07 23:01:29.942987] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.561 [2024-06-07 23:01:30.659006] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1280b30/0x1285020) succeed. 00:10:38.561 [2024-06-07 23:01:30.669068] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1282030/0x12c66b0) succeed. 
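With the RDMA transport created and both mlx5 ports registered, the rest of the target bring-up traced below follows the usual sequence: create the subsystem, attach an RDMA listener on 192.168.100.8:4420, back it with a null bdev, and add that bdev as a namespace. Condensed into direct scripts/rpc.py calls against the default /var/tmp/spdk.sock socket (rpc_cmd in the harness is assumed to forward to rpc.py; the arguments are taken verbatim from the trace, the direct-invocation form is only a sketch):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192                   # RDMA transport (already created above)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512                                                   # 1000 MB null bdev, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                            # reported as 'Namespace ID: 1 size: 1GB' below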
00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.561 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.562 [2024-06-07 23:01:30.724561] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.562 NULL1 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.562 23:01:30 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:38.562 [2024-06-07 23:01:30.777176] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:10:38.562 [2024-06-07 23:01:30.777214] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831619 ] 00:10:38.562 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.821 Attached to nqn.2016-06.io.spdk:cnode1 00:10:38.821 Namespace ID: 1 size: 1GB 00:10:38.821 fused_ordering(0) 00:10:38.821 fused_ordering(1) 00:10:38.821 fused_ordering(2) 00:10:38.821 fused_ordering(3) 00:10:38.821 fused_ordering(4) 00:10:38.821 fused_ordering(5) 00:10:38.821 fused_ordering(6) 00:10:38.821 fused_ordering(7) 00:10:38.821 fused_ordering(8) 00:10:38.821 fused_ordering(9) 00:10:38.821 fused_ordering(10) 00:10:38.821 fused_ordering(11) 00:10:38.821 fused_ordering(12) 00:10:38.821 fused_ordering(13) 00:10:38.821 fused_ordering(14) 00:10:38.821 fused_ordering(15) 00:10:38.821 fused_ordering(16) 00:10:38.821 fused_ordering(17) 00:10:38.821 fused_ordering(18) 00:10:38.821 fused_ordering(19) 00:10:38.821 fused_ordering(20) 00:10:38.821 fused_ordering(21) 00:10:38.821 fused_ordering(22) 00:10:38.821 fused_ordering(23) 00:10:38.821 fused_ordering(24) 00:10:38.821 fused_ordering(25) 00:10:38.821 fused_ordering(26) 00:10:38.821 fused_ordering(27) 00:10:38.821 fused_ordering(28) 00:10:38.821 fused_ordering(29) 00:10:38.821 fused_ordering(30) 00:10:38.821 fused_ordering(31) 00:10:38.821 fused_ordering(32) 00:10:38.821 fused_ordering(33) 00:10:38.821 fused_ordering(34) 00:10:38.821 fused_ordering(35) 00:10:38.821 fused_ordering(36) 00:10:38.821 fused_ordering(37) 00:10:38.821 fused_ordering(38) 00:10:38.821 fused_ordering(39) 00:10:38.821 fused_ordering(40) 00:10:38.821 fused_ordering(41) 00:10:38.821 fused_ordering(42) 00:10:38.821 fused_ordering(43) 00:10:38.821 fused_ordering(44) 00:10:38.821 fused_ordering(45) 00:10:38.821 fused_ordering(46) 00:10:38.821 fused_ordering(47) 00:10:38.821 fused_ordering(48) 00:10:38.821 fused_ordering(49) 00:10:38.821 fused_ordering(50) 00:10:38.821 fused_ordering(51) 00:10:38.821 fused_ordering(52) 00:10:38.821 fused_ordering(53) 00:10:38.821 fused_ordering(54) 00:10:38.821 fused_ordering(55) 00:10:38.821 fused_ordering(56) 00:10:38.821 fused_ordering(57) 00:10:38.821 fused_ordering(58) 00:10:38.821 fused_ordering(59) 00:10:38.821 fused_ordering(60) 00:10:38.821 fused_ordering(61) 00:10:38.821 fused_ordering(62) 00:10:38.821 fused_ordering(63) 00:10:38.821 fused_ordering(64) 00:10:38.821 fused_ordering(65) 00:10:38.821 fused_ordering(66) 00:10:38.821 fused_ordering(67) 00:10:38.821 fused_ordering(68) 00:10:38.821 fused_ordering(69) 00:10:38.821 fused_ordering(70) 00:10:38.821 fused_ordering(71) 00:10:38.821 fused_ordering(72) 00:10:38.821 fused_ordering(73) 00:10:38.821 fused_ordering(74) 00:10:38.821 fused_ordering(75) 00:10:38.821 fused_ordering(76) 00:10:38.821 fused_ordering(77) 00:10:38.821 fused_ordering(78) 00:10:38.821 fused_ordering(79) 00:10:38.821 fused_ordering(80) 00:10:38.821 fused_ordering(81) 00:10:38.821 fused_ordering(82) 00:10:38.821 fused_ordering(83) 00:10:38.821 fused_ordering(84) 00:10:38.821 fused_ordering(85) 00:10:38.821 fused_ordering(86) 00:10:38.821 fused_ordering(87) 00:10:38.821 fused_ordering(88) 00:10:38.821 fused_ordering(89) 00:10:38.821 fused_ordering(90) 00:10:38.821 fused_ordering(91) 00:10:38.821 fused_ordering(92) 00:10:38.821 fused_ordering(93) 00:10:38.821 fused_ordering(94) 00:10:38.821 fused_ordering(95) 00:10:38.821 fused_ordering(96) 00:10:38.821 
fused_ordering(97) ... fused_ordering(956) [860 consecutive per-index entries, identical in form to those above and below, timestamps 00:10:38.821 through 00:10:39.343]
fused_ordering(957) 00:10:39.343 fused_ordering(958) 00:10:39.343 fused_ordering(959) 00:10:39.343 fused_ordering(960) 00:10:39.343 fused_ordering(961) 00:10:39.343 fused_ordering(962) 00:10:39.343 fused_ordering(963) 00:10:39.343 fused_ordering(964) 00:10:39.343 fused_ordering(965) 00:10:39.343 fused_ordering(966) 00:10:39.343 fused_ordering(967) 00:10:39.343 fused_ordering(968) 00:10:39.343 fused_ordering(969) 00:10:39.343 fused_ordering(970) 00:10:39.343 fused_ordering(971) 00:10:39.343 fused_ordering(972) 00:10:39.343 fused_ordering(973) 00:10:39.343 fused_ordering(974) 00:10:39.343 fused_ordering(975) 00:10:39.343 fused_ordering(976) 00:10:39.343 fused_ordering(977) 00:10:39.343 fused_ordering(978) 00:10:39.343 fused_ordering(979) 00:10:39.343 fused_ordering(980) 00:10:39.343 fused_ordering(981) 00:10:39.343 fused_ordering(982) 00:10:39.343 fused_ordering(983) 00:10:39.343 fused_ordering(984) 00:10:39.343 fused_ordering(985) 00:10:39.343 fused_ordering(986) 00:10:39.343 fused_ordering(987) 00:10:39.343 fused_ordering(988) 00:10:39.343 fused_ordering(989) 00:10:39.343 fused_ordering(990) 00:10:39.343 fused_ordering(991) 00:10:39.343 fused_ordering(992) 00:10:39.343 fused_ordering(993) 00:10:39.343 fused_ordering(994) 00:10:39.343 fused_ordering(995) 00:10:39.343 fused_ordering(996) 00:10:39.343 fused_ordering(997) 00:10:39.343 fused_ordering(998) 00:10:39.343 fused_ordering(999) 00:10:39.343 fused_ordering(1000) 00:10:39.343 fused_ordering(1001) 00:10:39.343 fused_ordering(1002) 00:10:39.343 fused_ordering(1003) 00:10:39.343 fused_ordering(1004) 00:10:39.343 fused_ordering(1005) 00:10:39.343 fused_ordering(1006) 00:10:39.343 fused_ordering(1007) 00:10:39.343 fused_ordering(1008) 00:10:39.343 fused_ordering(1009) 00:10:39.343 fused_ordering(1010) 00:10:39.343 fused_ordering(1011) 00:10:39.343 fused_ordering(1012) 00:10:39.343 fused_ordering(1013) 00:10:39.343 fused_ordering(1014) 00:10:39.343 fused_ordering(1015) 00:10:39.343 fused_ordering(1016) 00:10:39.343 fused_ordering(1017) 00:10:39.343 fused_ordering(1018) 00:10:39.343 fused_ordering(1019) 00:10:39.343 fused_ordering(1020) 00:10:39.343 fused_ordering(1021) 00:10:39.343 fused_ordering(1022) 00:10:39.343 fused_ordering(1023) 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:39.343 rmmod nvme_rdma 00:10:39.343 rmmod nvme_fabrics 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 831440 ']' 00:10:39.343 
23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 831440 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 831440 ']' 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 831440 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 831440 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 831440' 00:10:39.343 killing process with pid 831440 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 831440 00:10:39.343 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 831440 00:10:39.603 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.603 23:01:31 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:39.603 00:10:39.603 real 0m8.180s 00:10:39.603 user 0m4.455s 00:10:39.603 sys 0m4.945s 00:10:39.603 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:39.603 23:01:31 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.603 ************************************ 00:10:39.603 END TEST nvmf_fused_ordering 00:10:39.603 ************************************ 00:10:39.603 23:01:31 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:39.603 23:01:31 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:39.603 23:01:31 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:39.603 23:01:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:39.603 ************************************ 00:10:39.603 START TEST nvmf_delete_subsystem 00:10:39.603 ************************************ 00:10:39.603 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:39.863 * Looking for test storage... 
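Teardown for the fused_ordering case follows the nvmftestfini pattern visible in the trace: clear the EXIT/SIGINT/SIGTERM trap, unload the kernel initiator modules, and stop the nvmf_tgt reactor process (pid 831440 in this run). A rough sketch of that cleanup, assuming the target pid is known and the process was started from the same shell (otherwise the wait is a no-op):

  # Unload the kernel NVMe-oF initiator modules pulled in for the test
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  # Stop the target application started for this test case
  pid=831440            # value taken from the trace above; normally tracked by the harness
  kill "$pid"
  wait "$pid" 2>/dev/null || true

With that done the harness prints the per-test timing summary above and moves on to the nvmf_delete_subsystem suite.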
00:10:39.863 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.863 23:01:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.434 23:01:38 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:46.434 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:46.434 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:46.434 Found net devices under 0000:da:00.0: mlx_0_0 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.434 23:01:38 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:46.434 Found net devices under 0000:da:00.1: mlx_0_1 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:46.434 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:46.435 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.435 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:46.435 altname enp218s0f0np0 00:10:46.435 altname ens818f0np0 00:10:46.435 inet 192.168.100.8/24 scope global mlx_0_0 00:10:46.435 valid_lft forever preferred_lft forever 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:46.435 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.435 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:46.435 altname enp218s0f1np1 00:10:46.435 altname ens818f1np1 00:10:46.435 inet 192.168.100.9/24 scope global mlx_0_1 00:10:46.435 valid_lft forever preferred_lft forever 00:10:46.435 23:01:38 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:46.435 192.168.100.9' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:46.435 192.168.100.9' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:46.435 192.168.100.9' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=835267 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 835267 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 835267 ']' 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.435 23:01:38 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:46.435 [2024-06-07 23:01:38.467469] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:10:46.435 [2024-06-07 23:01:38.467522] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.435 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.435 [2024-06-07 23:01:38.527925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.435 [2024-06-07 23:01:38.604744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.435 [2024-06-07 23:01:38.604783] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.436 [2024-06-07 23:01:38.604790] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.436 [2024-06-07 23:01:38.604795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.436 [2024-06-07 23:01:38.604800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.436 [2024-06-07 23:01:38.604845] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.436 [2024-06-07 23:01:38.604849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.003 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:47.003 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:10:47.003 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.003 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:47.003 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.262 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 [2024-06-07 23:01:39.324552] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2362360/0x2366850) succeed. 00:10:47.263 [2024-06-07 23:01:39.333402] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2363860/0x23a7ee0) succeed. 
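At this point in the trace the target itself has come up: nvmf_tgt was launched with -i 0 -e 0xFFFF -m 0x3 (two reactors, cores 0 and 1), the harness waited for it to listen on /var/tmp/spdk.sock, and the first RPC, nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, created IB devices for both mlx5 ports. A minimal stand-alone sketch of that bring-up, assuming a stock SPDK build tree and the default RPC socket (the readiness poll below is a stand-in for the harness's waitforlisten helper, not the harness code itself):

  # Start the NVMe-oF target on cores 0-1 with all tracepoint groups enabled
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # Crude readiness check: retry a harmless RPC until the app answers on /var/tmp/spdk.sock
  until ./scripts/rpc.py -t 2 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

  # Create the RDMA transport with the options recorded in the log
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192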
00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 [2024-06-07 23:01:39.412393] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 NULL1 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 Delay0 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=835514 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:47.263 23:01:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:47.263 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.263 [2024-06-07 23:01:39.509705] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
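The RPC sequence in this stretch of the log builds the fixture the test needs: subsystem nqn.2016-06.io.spdk:cnode1 with allow-any-host, serial SPDK00000000000001 and at most 10 namespaces; an RDMA listener on 192.168.100.8:4420; a 1000 MB null bdev; a delay bdev in front of it that injects roughly one-second latencies (the -r/-t/-w/-n values are microseconds); and the delay bdev attached as namespace 1. spdk_nvme_perf is then started in the background so I/O is guaranteed to be in flight when the subsystem is deleted. Collapsed into a plain script, with paths relative to the SPDK tree and values copied from the trace, the setup is roughly:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512          # 1000 MB backing device, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 5 s of 70/30 random read/write I/O from cores 2-3, queue depth 128, 512-byte blocks
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

The delay bdev is the point of the exercise: with one-second completions and a queue depth of 128, there is always a full queue of outstanding commands for nvmf_delete_subsystem to cut off.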
00:10:49.791 23:01:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.791 23:01:41 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.791 23:01:41 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 NVMe io qpair process completion error 00:10:50.356 23:01:42 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.356 23:01:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:50.356 23:01:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 835514 00:10:50.356 23:01:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:50.923 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:50.923 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 835514 00:10:50.923 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Write completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read 
completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 Read completed with error (sct=0, sc=8) 00:10:51.489 starting I/O failed: -6 00:10:51.489 [several hundred further Read/Write completions with error (sct=0, sc=8), most paired with 'starting I/O failed: -6', repeat identically here while the queued perf I/O drains against the deleted subsystem] 00:10:51.491 Write completed with error (sct=0, sc=8) 00:10:51.491 Read
completed with error (sct=0, sc=8) 00:10:51.491 Write completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Write completed with error (sct=0, sc=8) 00:10:51.491 Write completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Write completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Read completed with error (sct=0, sc=8) 00:10:51.491 Initializing NVMe Controllers 00:10:51.491 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.491 Controller IO queue size 128, less than required. 00:10:51.491 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:51.491 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:51.491 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:51.491 Initialization complete. Launching workers. 00:10:51.491 ======================================================== 00:10:51.491 Latency(us) 00:10:51.491 Device Information : IOPS MiB/s Average min max 00:10:51.491 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.47 0.04 1593987.28 1000090.71 2977174.64 00:10:51.491 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.47 0.04 1595378.31 1001106.85 2978065.83 00:10:51.491 ======================================================== 00:10:51.491 Total : 160.95 0.08 1594682.80 1000090.71 2978065.83 00:10:51.491 00:10:51.491 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:51.491 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 835514 00:10:51.491 23:01:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:51.491 [2024-06-07 23:01:43.608201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:51.491 [2024-06-07 23:01:43.608241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
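This failure burst is the behavior the test is after: nvmf_delete_subsystem is issued while the delay bdev still holds a full queue of commands, so every outstanding request completes with an error, the host qpair logs CQ transport error -6 (No such device or address), the controller is marked failed, and spdk_nvme_perf exits reporting errors. While that plays out, delete_subsystem.sh (the @34-@38 lines in the xtrace) simply polls the perf PID. A paraphrase of that loop follows; the error redirection and exact exit handling are assumptions, since the xtrace only shows the kill/sleep/counter steps:

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do    # perf still running?
      (( delay++ > 30 )) && exit 1              # give up after ~15 s of polling
      sleep 0.5
  done
  NOT wait "$perf_pid"                          # NOT is the harness's expect-failure wrapper:
                                                # wait must report a non-zero exit here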
00:10:51.491 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 835514 00:10:52.058 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (835514) - No such process 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 835514 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 835514 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 835514 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.058 [2024-06-07 23:01:44.122552] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=836209 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- 
# delay=0 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.058 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:52.058 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.058 [2024-06-07 23:01:44.203370] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:52.623 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:52.623 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:52.623 23:01:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.880 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:52.880 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:52.880 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.445 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.445 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:53.445 23:01:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.010 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:54.010 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:54.010 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.575 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:54.575 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:54.575 23:01:46 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.189 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.189 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:55.189 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.489 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.489 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:55.489 23:01:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:56.056 23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.056 23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:56.056 23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:56.623 23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.623 
23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:56.623 23:01:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:57.190 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:57.190 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:57.190 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:57.448 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:57.448 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:57.448 23:01:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:58.014 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:58.014 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:58.014 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:58.580 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:58.580 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:58.580 23:01:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.146 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:59.146 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:59.146 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.146 Initializing NVMe Controllers 00:10:59.146 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.146 Controller IO queue size 128, less than required. 00:10:59.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:59.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:59.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:59.146 Initialization complete. Launching workers. 
00:10:59.146 ======================================================== 00:10:59.146 Latency(us) 00:10:59.146 Device Information : IOPS MiB/s Average min max 00:10:59.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001230.99 1000051.91 1003705.63 00:10:59.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002438.34 1000107.35 1006100.87 00:10:59.146 ======================================================== 00:10:59.146 Total : 256.00 0.12 1001834.67 1000051.91 1006100.87 00:10:59.146 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 836209 00:10:59.714 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (836209) - No such process 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 836209 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:59.714 rmmod nvme_rdma 00:10:59.714 rmmod nvme_fabrics 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 835267 ']' 00:10:59.714 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 835267 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 835267 ']' 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 835267 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 835267 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 835267' 00:10:59.715 killing process with pid 835267 00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 835267 
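What follows the second perf run above is the standard teardown: the "No such process" from kill -0 confirms perf 836209 had already finished, wait reaps it, the EXIT trap is cleared, nvmftestfini unloads the host-side nvme-rdma / nvme-fabrics modules, and killprocess stops the nvmf_tgt reactor (pid 835267). Stripped of the retry loop and logging scaffolding, and reusing the nvmfpid variable the harness set at startup, the cleanup amounts to roughly:

  trap - SIGINT SIGTERM EXIT

  # Unload host-side fabrics modules; harmless if they are already gone
  modprobe -v -r nvme-rdma    || true
  modprobe -v -r nvme-fabrics || true

  # Stop the target application if it is still the running reactor process
  if kill -0 "$nvmfpid" 2> /dev/null; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid"
      wait "$nvmfpid" || true
  fi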
00:10:59.715 23:01:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 835267 00:10:59.973 23:01:52 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.973 23:01:52 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:59.973 00:10:59.973 real 0m20.202s 00:10:59.973 user 0m50.069s 00:10:59.973 sys 0m5.906s 00:10:59.973 23:01:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:59.973 23:01:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.973 ************************************ 00:10:59.973 END TEST nvmf_delete_subsystem 00:10:59.973 ************************************ 00:10:59.973 23:01:52 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:10:59.973 23:01:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:59.973 23:01:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:59.973 23:01:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:59.973 ************************************ 00:10:59.973 START TEST nvmf_ns_masking 00:10:59.973 ************************************ 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:10:59.973 * Looking for test storage... 00:10:59.973 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.973 23:01:52 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=a9f62892-c227-41b3-a1ff-8c6048331989 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.974 23:01:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.540 
23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:06.540 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.540 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:06.540 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:06.541 
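gather_supported_nvmf_pci_devs classifies the NICs on the PCI bus into e810, x722 and Mellanox lists and, because this job runs with SPDK_TEST_NVMF_NICS=mlx5, keeps only the Mellanox entries; here it matches two ConnectX ports (0x15b3:0x1015) at 0000:da:00.0 and 0000:da:00.1. Roughly the same discovery can be reproduced by hand, assuming lspci is available on the node:

    # Mellanox devices (vendor 0x15b3) with numeric IDs and full PCI addresses
    lspci -Dnn -d 15b3:
    # map each matched function to its netdev name via sysfs, as the script does
    for pci in 0000:da:00.0 0000:da:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done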
23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:06.541 Found net devices under 0000:da:00.0: mlx_0_0 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:06.541 Found net devices under 0000:da:00.1: mlx_0_1 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 
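Before any addressing is done, rdma_device_init loads the kernel modules needed for RDMA-CM based NVMe-oF. The same step in isolation, assuming the modules exist on the test kernel:

    # modules loaded by load_ib_rdma_modules in the trace above
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        sudo modprobe "$mod"
    done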
00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:06.541 23:01:57 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:06.541 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:06.541 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:06.541 altname enp218s0f0np0 00:11:06.541 altname ens818f0np0 00:11:06.541 inet 192.168.100.8/24 scope global mlx_0_0 00:11:06.541 valid_lft forever preferred_lft forever 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:06.541 
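allocate_nic_ips walks the RDMA-capable interfaces and reads each one's IPv4 address; on this host mlx_0_0 already carries 192.168.100.8. The extraction is a plain ip/awk/cut pipeline; a standalone version of what the trace runs:

    get_ip_address() {
        local interface=$1
        # with `ip -o` each address is one line; column 4 is ADDR/PREFIX
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this machine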
23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:06.541 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:06.541 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:06.541 altname enp218s0f1np1 00:11:06.541 altname ens818f1np1 00:11:06.541 inet 192.168.100.9/24 scope global mlx_0_1 00:11:06.541 valid_lft forever preferred_lft forever 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:06.541 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:06.542 23:01:58 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:06.542 192.168.100.9' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:06.542 192.168.100.9' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:06.542 192.168.100.9' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=840956 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 840956 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 840956 ']' 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
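With both interface addresses gathered, nvmftestinit takes the first as the primary target address and the second as a spare, records the transport options, and loads the host-side nvme-rdma driver. A condensed sketch of that selection, using the values seen in this run:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    # kernel driver required for `nvme connect -t rdma` on the initiator side
    sudo modprobe nvme-rdma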
00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:06.542 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 [2024-06-07 23:01:58.169006] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:11:06.542 [2024-06-07 23:01:58.169052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.542 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.542 [2024-06-07 23:01:58.228184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.542 [2024-06-07 23:01:58.309279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.542 [2024-06-07 23:01:58.309314] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.542 [2024-06-07 23:01:58.309321] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.542 [2024-06-07 23:01:58.309327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.542 [2024-06-07 23:01:58.309332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.542 [2024-06-07 23:01:58.309368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.542 [2024-06-07 23:01:58.309465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.542 [2024-06-07 23:01:58.309550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.542 [2024-06-07 23:01:58.309550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.801 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:06.801 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:11:06.801 23:01:58 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.801 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:06.801 23:01:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.801 23:01:59 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.801 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:07.061 [2024-06-07 23:01:59.187703] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13569d0/0x135aec0) succeed. 00:11:07.061 [2024-06-07 23:01:59.196756] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1358010/0x139c550) succeed. 
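nvmfappstart launches the SPDK target with the shm id, tracepoint mask and core mask shown above, waits for its RPC socket, and ns_masking.sh then creates one RDMA transport. Roughly the same sequence by hand; the polling loop is a crude stand-in for the suite's waitforlisten helper:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc_py=$spdk/scripts/rpc.py

    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the RPC socket answers before issuing configuration calls
    until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # RDMA transport with the buffer settings used in this run
    "$rpc_py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192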
00:11:07.061 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:07.061 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:07.061 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:07.320 Malloc1 00:11:07.320 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:07.580 Malloc2 00:11:07.580 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.839 23:01:59 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:07.839 23:02:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:08.098 [2024-06-07 23:02:00.244160] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:08.098 23:02:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:08.098 23:02:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a9f62892-c227-41b3-a1ff-8c6048331989 -a 192.168.100.8 -s 4420 -i 4 00:11:08.357 23:02:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.357 23:02:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:08.357 23:02:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.357 23:02:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:11:08.357 23:02:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:10.896 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme 
list-ns /dev/nvme0 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:10.897 [ 0]:0x1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bd426d95a3f4a32862475e973a0352b 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bd426d95a3f4a32862475e973a0352b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:10.897 [ 0]:0x1 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bd426d95a3f4a32862475e973a0352b 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bd426d95a3f4a32862475e973a0352b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:10.897 [ 1]:0x2 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:10.897 23:02:02 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.159 23:02:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.417 23:02:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:11.417 23:02:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:11.417 23:02:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a9f62892-c227-41b3-a1ff-8c6048331989 -a 192.168.100.8 -s 4420 -i 4 00:11:11.984 23:02:03 
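The block above provisions the masking test (two 64 MB malloc bdevs with 512-byte blocks, a subsystem with namespace 1 attached normally, an RDMA listener on 192.168.100.8:4420) and connects the initiator under HOSTNQN/HOSTID; ns_is_visible then checks a namespace by grepping nvme list-ns and reading its NGUID. A condensed sketch of those steps, with the helper body reconstructed from the traced commands rather than copied from ns_masking.sh:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOSTID=a9f62892-c227-41b3-a1ff-8c6048331989   # uuidgen output for this run

    "$rpc_py" bdev_malloc_create 64 512 -b Malloc1
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc2
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    sudo nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 192.168.100.8 -s 4420 -i 4

    ns_is_visible() {   # approximate equivalent of the helper traced above
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace the host is not allowed to see reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # namespace 1 is attached without masking, so this succeeds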
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:11.984 23:02:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:11.984 23:02:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.984 23:02:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:11:11.984 23:02:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:11:11.984 23:02:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:13.887 23:02:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:13.887 23:02:06 
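Namespace 1 is then detached and re-attached with --no-auto-visible, so after reconnecting the host is expected not to see it; the trace wraps ns_is_visible 0x1 in the suite's NOT helper (which succeeds only when the wrapped command fails) and the namespace indeed reports an all-zero NGUID. The masking side of that, as rpc.py calls taken from the trace (the disconnect and reconnect around them are omitted here):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # detach namespace 1, then re-attach the same bdev without automatic visibility
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # from the connected host the namespace is now hidden:
    #   nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
    #   -> 00000000000000000000000000000000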
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:13.887 [ 0]:0x2 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.887 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:14.145 [ 0]:0x1 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bd426d95a3f4a32862475e973a0352b 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bd426d95a3f4a32862475e973a0352b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:14.145 [ 1]:0x2 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.145 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:14.404 23:02:06 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:14.404 [ 0]:0x2 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.404 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:14.662 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:14.662 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.662 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:14.662 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.920 23:02:06 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.920 23:02:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:14.920 23:02:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a9f62892-c227-41b3-a1ff-8c6048331989 -a 192.168.100.8 -s 4420 -i 4 00:11:15.484 23:02:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:15.484 23:02:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:15.484 23:02:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.484 23:02:07 
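Visibility of the --no-auto-visible namespace is granted and revoked per host NQN across the block above: nvmf_ns_add_host makes namespace 1 reappear to nqn.2016-06.io.spdk:host1 (its real NGUID 7bd426d95a3f4a32862475e973a0352b becomes readable again) while namespace 2 stays untouched, and nvmf_ns_remove_host hides it once more. The two RPCs as run in the trace:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # expose namespace 1 of cnode1 to this one host
    "$rpc_py" nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ns_is_visible 0x1 now passes on the connected initiator

    # revoke the grant; ns_is_visible 0x1 is expected to fail again afterwards
    "$rpc_py" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1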
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:11:15.484 23:02:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:11:15.484 23:02:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:17.384 [ 0]:0x1 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bd426d95a3f4a32862475e973a0352b 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bd426d95a3f4a32862475e973a0352b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:17.384 [ 1]:0x2 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.384 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 
-- # valid_exec_arg ns_is_visible 0x1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:17.642 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.643 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:17.643 [ 0]:0x2 00:11:17.643 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:17.643 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:17.901 23:02:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:17.901 [2024-06-07 23:02:10.103119] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:17.901 request: 00:11:17.901 { 00:11:17.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.901 "nsid": 2, 00:11:17.901 "host": "nqn.2016-06.io.spdk:host1", 00:11:17.901 "method": "nvmf_ns_remove_host", 00:11:17.901 "req_id": 1 00:11:17.901 } 00:11:17.901 Got JSON-RPC error response 00:11:17.901 response: 00:11:17.901 { 00:11:17.901 "code": -32602, 00:11:17.901 "message": "Invalid parameters" 00:11:17.901 } 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.901 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:18.159 [ 0]:0x2 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9331f70743c64f9ca3703de2348be90b 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9331f70743c64f9ca3703de2348be90b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:18.159 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.417 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:18.675 rmmod nvme_rdma 00:11:18.675 rmmod nvme_fabrics 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 840956 ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 840956 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 840956 ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 840956 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 840956 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 
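After the final checks (including the expected "Invalid parameters" JSON-RPC error above, where nvmf_ns_remove_host is attempted on namespace 2, which was attached without --no-auto-visible), the test disconnects the initiator, deletes the subsystem, and nvmftestfini unloads the host drivers and stops the target. A sketch of that teardown, with the suite's killprocess reduced to a plain kill of the recorded PID:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nvmfpid=840956   # PID recorded when nvmf_tgt was started in this run

    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # host-side cleanup performed by nvmftestfini here
    sudo modprobe -v -r nvme-rdma
    sudo modprobe -v -r nvme-fabrics

    kill "$nvmfpid"   # killprocess also verifies the process name and waits for exit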
00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 840956' 00:11:18.675 killing process with pid 840956 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 840956 00:11:18.675 23:02:10 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 840956 00:11:18.935 23:02:11 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.935 23:02:11 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:18.935 00:11:18.935 real 0m19.040s 00:11:18.935 user 0m55.625s 00:11:18.935 sys 0m5.743s 00:11:18.935 23:02:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:18.935 23:02:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.935 ************************************ 00:11:18.935 END TEST nvmf_ns_masking 00:11:18.935 ************************************ 00:11:18.935 23:02:11 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:18.935 23:02:11 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:18.935 23:02:11 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:18.935 23:02:11 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:18.935 23:02:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:18.935 ************************************ 00:11:18.935 START TEST nvmf_nvme_cli 00:11:18.935 ************************************ 00:11:18.935 23:02:11 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:19.193 * Looking for test storage... 
00:11:19.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.193 23:02:11 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.194 23:02:11 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.841 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:25.842 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:25.842 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:25.842 Found net devices under 0000:da:00.0: mlx_0_0 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.842 23:02:17 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:25.842 Found net devices under 0000:da:00.1: mlx_0_1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
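The interface/IP discovery traced here reduces to two small helpers: get_rdma_if_list matches the detected net_devs against the RDMA-capable devices reported by rxe_cfg, and get_ip_address extracts the first IPv4 address of an interface. A condensed sketch of the address-extraction step, assuming the simplified form shown in the trace (the real nvmf/common.sh may carry additional options):

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # 192.168.100.9 on this test bed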
00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:25.842 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.842 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:25.842 altname enp218s0f0np0 00:11:25.842 altname ens818f0np0 00:11:25.842 inet 192.168.100.8/24 scope global mlx_0_0 00:11:25.842 valid_lft forever preferred_lft forever 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.842 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:25.843 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.843 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:25.843 altname enp218s0f1np1 00:11:25.843 altname ens818f1np1 00:11:25.843 inet 192.168.100.9/24 scope global mlx_0_1 00:11:25.843 valid_lft forever preferred_lft forever 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:25.843 192.168.100.9' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:25.843 192.168.100.9' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:25.843 192.168.100.9' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=846769 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 846769 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 846769 ']' 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 23:02:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.843 [2024-06-07 23:02:17.411347] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:11:25.843 [2024-06-07 23:02:17.411390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.843 [2024-06-07 23:02:17.471997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.843 [2024-06-07 23:02:17.549289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.843 [2024-06-07 23:02:17.549322] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.843 [2024-06-07 23:02:17.549329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.843 [2024-06-07 23:02:17.549335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.843 [2024-06-07 23:02:17.549340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
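nvmfappstart, as traced above, launches the target binary, records its pid in nvmfpid, and blocks in waitforlisten until the application answers on the default RPC socket. A hedged sketch of that launch; the binary path, flags, pid, and socket path are verbatim from the log, while backgrounding with & and capturing $! is an assumption about how nvmfappstart wires this up:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!          # 846769 in this run
    # waitforlisten polls until the app is reachable on /var/tmp/spdk.sock,
    # after which the test can start issuing RPCs against it
    waitforlisten "$nvmfpid"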
00:11:25.843 [2024-06-07 23:02:17.549386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.843 [2024-06-07 23:02:17.549400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.843 [2024-06-07 23:02:17.549494] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.843 [2024-06-07 23:02:17.549495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.102 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 [2024-06-07 23:02:18.286917] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22f19d0/0x22f5ec0) succeed. 00:11:26.102 [2024-06-07 23:02:18.295979] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22f3010/0x2337550) succeed. 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 Malloc0 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 Malloc1 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.361 23:02:18 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.362 [2024-06-07 23:02:18.488061] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:26.362 00:11:26.362 Discovery Log Number of Records 2, Generation counter 2 00:11:26.362 =====Discovery Log Entry 0====== 00:11:26.362 trtype: rdma 00:11:26.362 adrfam: ipv4 00:11:26.362 subtype: current discovery subsystem 00:11:26.362 treq: not required 00:11:26.362 portid: 0 00:11:26.362 trsvcid: 4420 00:11:26.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.362 traddr: 192.168.100.8 00:11:26.362 eflags: explicit discovery connections, duplicate discovery information 00:11:26.362 rdma_prtype: not specified 00:11:26.362 rdma_qptype: connected 00:11:26.362 rdma_cms: rdma-cm 00:11:26.362 rdma_pkey: 0x0000 00:11:26.362 =====Discovery Log Entry 1====== 00:11:26.362 trtype: rdma 00:11:26.362 adrfam: ipv4 00:11:26.362 subtype: nvme subsystem 00:11:26.362 treq: not required 00:11:26.362 portid: 0 00:11:26.362 trsvcid: 4420 00:11:26.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.362 traddr: 192.168.100.8 00:11:26.362 eflags: none 00:11:26.362 rdma_prtype: not specified 00:11:26.362 rdma_qptype: connected 00:11:26.362 rdma_cms: rdma-cm 00:11:26.362 rdma_pkey: 0x0000 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:26.362 23:02:18 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:26.362 23:02:18 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:11:27.310 23:02:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:29.841 /dev/nvme0n1 ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.841 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:29.842 23:02:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.409 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:30.409 rmmod nvme_rdma 00:11:30.668 rmmod nvme_fabrics 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 846769 ']' 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 846769 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 846769 ']' 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 846769 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 846769 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 846769' 00:11:30.668 killing process with pid 846769 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 846769 00:11:30.668 23:02:22 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 846769 00:11:30.927 23:02:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.927 23:02:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:30.927 00:11:30.927 real 0m11.879s 00:11:30.927 user 0m23.589s 00:11:30.927 sys 0m5.136s 00:11:30.927 23:02:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:30.927 23:02:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.927 ************************************ 00:11:30.927 END TEST nvmf_nvme_cli 00:11:30.927 ************************************ 00:11:30.927 23:02:23 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:11:30.927 23:02:23 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:30.927 23:02:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:30.927 23:02:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:30.927 23:02:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:30.927 ************************************ 00:11:30.927 START TEST nvmf_host_management 00:11:30.927 ************************************ 00:11:30.927 23:02:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:31.186 * Looking for test storage... 
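Before the next test starts, it helps to condense what nvmf_nvme_cli actually exercised, since every step appears in the trace above: create an RDMA transport, expose two malloc bdevs through one subsystem, listen on the first RDMA IP, drive the kernel initiator with nvme-cli, and tear everything down. A compressed sketch using the exact arguments from the log (rpc_cmd is the harness's RPC helper; treating it as a thin wrapper over scripts/rpc.py is an assumption, and NVME_HOSTNQN/NVME_HOSTID are the variables set earlier in nvmf/common.sh):

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1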
00:11:31.186 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.186 23:02:23 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.187 23:02:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:37.758 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:37.758 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:37.758 Found net devices under 0000:da:00.0: mlx_0_0 00:11:37.758 
23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:37.758 Found net devices under 0000:da:00.1: mlx_0_1 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.758 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:37.759 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:37.759 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:37.759 altname enp218s0f0np0 00:11:37.759 altname ens818f0np0 00:11:37.759 inet 192.168.100.8/24 scope global mlx_0_0 00:11:37.759 valid_lft forever preferred_lft forever 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:37.759 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:37.759 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:37.759 altname enp218s0f1np1 00:11:37.759 altname ens818f1np1 00:11:37.759 inet 192.168.100.9/24 scope global mlx_0_1 00:11:37.759 valid_lft forever preferred_lft forever 
00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- 
# ip -o -4 addr show mlx_0_1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:37.759 192.168.100.9' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:37.759 192.168.100.9' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:37.759 192.168.100.9' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=851308 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 851308 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 851308 ']' 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:37.759 23:02:29 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.760 [2024-06-07 23:02:29.520298] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
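The address bookkeeping traced above (common.sh@456-458) simply takes the first and second entries of the discovered RDMA IP list. A sketch using the two addresses found on this system:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9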
00:11:37.760 [2024-06-07 23:02:29.520343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.760 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.760 [2024-06-07 23:02:29.579642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.760 [2024-06-07 23:02:29.659081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.760 [2024-06-07 23:02:29.659116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.760 [2024-06-07 23:02:29.659123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.760 [2024-06-07 23:02:29.659129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.760 [2024-06-07 23:02:29.659134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.760 [2024-06-07 23:02:29.659231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.760 [2024-06-07 23:02:29.659317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.760 [2024-06-07 23:02:29.659423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.760 [2024-06-07 23:02:29.659424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.328 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.328 [2024-06-07 23:02:30.403472] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c01cc0/0x1c061b0) succeed. 00:11:38.329 [2024-06-07 23:02:30.412532] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c03300/0x1c47840) succeed. 
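nvmfappstart launches nvmf_tgt with core mask 0x1E, which decodes to cores 1-4 and matches the four "Reactor started on core" notices above; core 0 is left free for the bdevperf client started later with -c 0x1. A quick decode of the mask:

    # 0x1E = 0b11110: bit 0 (core 0) clear, bits 1-4 (cores 1-4) set.
    for core in 0 1 2 3 4; do
        printf 'core %d: %d\n' "$core" $(( (0x1E >> core) & 1 ))
    done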
00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.329 Malloc0 00:11:38.329 [2024-06-07 23:02:30.586766] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:38.329 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=851453 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 851453 /var/tmp/bdevperf.sock 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 851453 ']' 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:38.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
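The rpcs.txt batch written at host_management.sh@23 is piped into rpc_cmd at @30 but its contents are not echoed in this log, so the exact commands are not visible; judging from the resources that do appear (the rdma transport, the Malloc0 bdev, and the listener on 192.168.100.8 port 4420), the setup amounts to roughly the following rpc.py calls (bdev size, serial number and flags are assumptions):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # traced at @18
    $rpc bdev_malloc_create 64 512 -b Malloc0                              # sizes assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0         # serial assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0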
00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:38.588 { 00:11:38.588 "params": { 00:11:38.588 "name": "Nvme$subsystem", 00:11:38.588 "trtype": "$TEST_TRANSPORT", 00:11:38.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.588 "adrfam": "ipv4", 00:11:38.588 "trsvcid": "$NVMF_PORT", 00:11:38.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.588 "hdgst": ${hdgst:-false}, 00:11:38.588 "ddgst": ${ddgst:-false} 00:11:38.588 }, 00:11:38.588 "method": "bdev_nvme_attach_controller" 00:11:38.588 } 00:11:38.588 EOF 00:11:38.588 )") 00:11:38.588 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:38.589 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:38.589 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:38.589 23:02:30 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:38.589 "params": { 00:11:38.589 "name": "Nvme0", 00:11:38.589 "trtype": "rdma", 00:11:38.589 "traddr": "192.168.100.8", 00:11:38.589 "adrfam": "ipv4", 00:11:38.589 "trsvcid": "4420", 00:11:38.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:38.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:38.589 "hdgst": false, 00:11:38.589 "ddgst": false 00:11:38.589 }, 00:11:38.589 "method": "bdev_nvme_attach_controller" 00:11:38.589 }' 00:11:38.589 [2024-06-07 23:02:30.676794] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:11:38.589 [2024-06-07 23:02:30.676845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851453 ] 00:11:38.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.589 [2024-06-07 23:02:30.737526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.589 [2024-06-07 23:02:30.811506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.848 Running I/O for 10 seconds... 
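bdevperf is pointed at the target through process substitution: the bdev_nvme_attach_controller JSON printed by gen_nvmf_target_json above is handed over as /dev/fd/63. The command line (echoed verbatim further down when the test later kills the job) is effectively:

    $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10

i.e. queue depth 64, 64 KiB I/Os, a verify workload, and a 10-second run against Nvme0n1 attached over RDMA to 192.168.100.8:4420.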
00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1603 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1603 -ge 100 ']' 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.417 [2024-06-07 23:02:31.595380] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 7 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:39.417 23:02:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:40.356 [2024-06-07 23:02:32.599440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:11:40.356 [2024-06-07 23:02:32.599473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:11:40.356 [2024-06-07 23:02:32.599497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 
23:02:32.599599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:11:40.356 [2024-06-07 23:02:32.599729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 
23:02:32.599738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:11:40.356 [2024-06-07 23:02:32.599744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:11:40.356 [2024-06-07 23:02:32.599759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:11:40.356 [2024-06-07 23:02:32.599773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:11:40.356 [2024-06-07 23:02:32.599789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:11:40.356 [2024-06-07 23:02:32.599806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:11:40.356 [2024-06-07 23:02:32.599821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e181000 len:0x10000 key:0x182400 00:11:40.356 [2024-06-07 23:02:32.599836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e160000 len:0x10000 key:0x182400 00:11:40.356 [2024-06-07 23:02:32.599853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.356 [2024-06-07 23:02:32.599862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:11:40.356 [2024-06-07 23:02:32.599869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 
23:02:32.599877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.599990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.599999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.600005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 
23:02:32.600017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.600024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.600039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.600055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:11:40.357 [2024-06-07 23:02:32.600071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 
23:02:32.600151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:11:40.357 [2024-06-07 23:02:32.600276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 
23:02:32.600284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:11:40.357 [2024-06-07 23:02:32.600291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:11:40.357 [2024-06-07 23:02:32.600305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:11:40.357 [2024-06-07 23:02:32.600319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.357 [2024-06-07 23:02:32.600328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 
23:02:32.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.600433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:11:40.358 [2024-06-07 23:02:32.600440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:fbc0 p:0 m:0 dnr:0 00:11:40.358 [2024-06-07 23:02:32.602365] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:11:40.358 [2024-06-07 23:02:32.603275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 851453 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:40.358 { 00:11:40.358 "params": { 00:11:40.358 "name": "Nvme$subsystem", 00:11:40.358 "trtype": "$TEST_TRANSPORT", 00:11:40.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.358 "adrfam": "ipv4", 00:11:40.358 "trsvcid": "$NVMF_PORT", 00:11:40.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.358 "hdgst": ${hdgst:-false}, 00:11:40.358 "ddgst": ${ddgst:-false} 00:11:40.358 }, 00:11:40.358 "method": "bdev_nvme_attach_controller" 00:11:40.358 } 00:11:40.358 EOF 00:11:40.358 )") 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:40.358 23:02:32 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:40.358 "params": { 00:11:40.358 "name": "Nvme0", 00:11:40.358 "trtype": "rdma", 00:11:40.358 "traddr": "192.168.100.8", 00:11:40.358 "adrfam": "ipv4", 00:11:40.358 "trsvcid": "4420", 00:11:40.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:40.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:40.358 "hdgst": false, 00:11:40.358 "ddgst": false 00:11:40.358 }, 00:11:40.358 "method": "bdev_nvme_attach_controller" 00:11:40.358 }' 00:11:40.617 [2024-06-07 23:02:32.652198] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:11:40.617 [2024-06-07 23:02:32.652238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851828 ] 00:11:40.617 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.617 [2024-06-07 23:02:32.718040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.617 [2024-06-07 23:02:32.791836] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.876 Running I/O for 1 seconds... 00:11:41.811 00:11:41.811 Latency(us) 00:11:41.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.811 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:41.811 Verification LBA range: start 0x0 length 0x400 00:11:41.811 Nvme0n1 : 1.01 3027.07 189.19 0.00 0.00 20709.60 565.64 42941.68 00:11:41.811 =================================================================================================================== 00:11:41.811 Total : 3027.07 189.19 0.00 0.00 20709.60 565.64 42941.68 00:11:42.070 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 851453 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:42.070 rmmod nvme_rdma 00:11:42.070 rmmod nvme_fabrics 00:11:42.070 
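A quick consistency check on the one-second bdevperf summary above: at an I/O size of 65536 bytes, 3027.07 IOPS works out to 3027.07 / 16 = 189.19 MiB/s, matching the reported throughput, and with a queue depth of 64 the roughly 20.7 ms average latency is in line with Little's law (64 / 3027.07 ≈ 21 ms).

    echo '3027.07 * 65536 / 1048576' | bc -l    # ≈ 189.19 MiB/s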
23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 851308 ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 851308 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 851308 ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 851308 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 851308 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 851308' 00:11:42.070 killing process with pid 851308 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 851308 00:11:42.070 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 851308 00:11:42.328 [2024-06-07 23:02:34.534486] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:42.328 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.328 23:02:34 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:42.328 23:02:34 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:42.329 00:11:42.329 real 0m11.413s 00:11:42.329 user 0m24.719s 00:11:42.329 sys 0m5.544s 00:11:42.329 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:42.329 23:02:34 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.329 ************************************ 00:11:42.329 END TEST nvmf_host_management 00:11:42.329 ************************************ 00:11:42.329 23:02:34 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:42.329 23:02:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:42.329 23:02:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:42.329 23:02:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:42.587 ************************************ 00:11:42.587 START TEST nvmf_lvol 00:11:42.587 ************************************ 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:42.587 * Looking for test storage... 
00:11:42.587 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.587 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.588 23:02:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.153 23:02:40 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:49.153 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:49.153 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:49.153 Found net devices under 0000:da:00.0: mlx_0_0 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:49.153 Found net devices under 0000:da:00.1: mlx_0_1 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.153 23:02:40 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:49.153 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:49.154 23:02:40 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:49.154 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.154 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:49.154 altname enp218s0f0np0 00:11:49.154 altname ens818f0np0 00:11:49.154 inet 192.168.100.8/24 scope global mlx_0_0 00:11:49.154 valid_lft forever preferred_lft forever 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:49.154 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:49.154 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:49.154 altname enp218s0f1np1 00:11:49.154 altname ens818f1np1 00:11:49.154 inet 192.168.100.9/24 scope global mlx_0_1 00:11:49.154 valid_lft forever preferred_lft forever 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:49.154 192.168.100.9' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:49.154 192.168.100.9' 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:11:49.154 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:49.155 192.168.100.9' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=855632 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 855632 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@830 -- # '[' -z 855632 ']' 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:49.155 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.155 [2024-06-07 23:02:41.182132] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:11:49.155 [2024-06-07 23:02:41.182184] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.155 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.155 [2024-06-07 23:02:41.245528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.155 [2024-06-07 23:02:41.321675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.155 [2024-06-07 23:02:41.321721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.155 [2024-06-07 23:02:41.321728] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.155 [2024-06-07 23:02:41.321733] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.155 [2024-06-07 23:02:41.321738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.155 [2024-06-07 23:02:41.321785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.155 [2024-06-07 23:02:41.321880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.155 [2024-06-07 23:02:41.321882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.721 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:49.721 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:11:49.721 23:02:41 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:49.721 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:49.721 23:02:41 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 23:02:42 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.980 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:49.980 [2024-06-07 23:02:42.194695] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x204bef0/0x20503e0) succeed. 00:11:49.980 [2024-06-07 23:02:42.203550] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x204d490/0x2091a70) succeed. 
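The target bring-up just logged (nvmfappstart with core mask 0x7, then creating the RDMA transport) can be reproduced by hand with the same binary and RPC call. A minimal sketch, reusing the logged flags; the sleep is a simplified stand-in for the test's waitforlisten helper.

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Start the target with the same instance id, trace flags and core mask as above.
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
sleep 2   # simplified stand-in for waitforlisten on /var/tmp/spdk.sock
# Create the RDMA transport exactly as nvmf_lvol.sh line 21 does in the log.
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192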
00:11:50.239 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.239 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:50.239 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.497 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:50.497 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:50.756 23:02:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:51.014 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d3fdde29-7f6c-4648-8ad8-2fb0dfebf894 00:11:51.014 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3fdde29-7f6c-4648-8ad8-2fb0dfebf894 lvol 20 00:11:51.014 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a25918c4-821e-43fe-b46c-b63f5cb30db7 00:11:51.014 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:51.273 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a25918c4-821e-43fe-b46c-b63f5cb30db7 00:11:51.560 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:51.560 [2024-06-07 23:02:43.717852] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:51.560 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:51.875 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=856131 00:11:51.875 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:51.875 23:02:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:51.875 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.813 23:02:44 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a25918c4-821e-43fe-b46c-b63f5cb30db7 MY_SNAPSHOT 00:11:53.073 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=081652ec-46e2-4551-a99d-bf19a98073ee 00:11:53.073 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a25918c4-821e-43fe-b46c-b63f5cb30db7 30 00:11:53.073 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 081652ec-46e2-4551-a99d-bf19a98073ee MY_CLONE 00:11:53.331 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=447bfae5-0fc5-4162-bff9-b3b33c82f769 00:11:53.331 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 447bfae5-0fc5-4162-bff9-b3b33c82f769 00:11:53.590 23:02:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 856131 00:12:03.565 Initializing NVMe Controllers 00:12:03.565 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:03.565 Controller IO queue size 128, less than required. 00:12:03.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:03.565 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:03.565 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:03.565 Initialization complete. Launching workers. 00:12:03.565 ======================================================== 00:12:03.565 Latency(us) 00:12:03.565 Device Information : IOPS MiB/s Average min max 00:12:03.565 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16427.90 64.17 7793.52 2373.90 48823.78 00:12:03.565 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16537.80 64.60 7742.21 3072.60 39629.68 00:12:03.565 ======================================================== 00:12:03.565 Total : 32965.70 128.77 7767.78 2373.90 48823.78 00:12:03.565 00:12:03.565 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:03.565 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a25918c4-821e-43fe-b46c-b63f5cb30db7 00:12:03.565 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3fdde29-7f6c-4648-8ad8-2fb0dfebf894 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:03.824 rmmod nvme_rdma 00:12:03.824 rmmod nvme_fabrics 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 855632 ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 855632 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 855632 ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@953 -- # kill -0 855632 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 855632 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 855632' 00:12:03.824 killing process with pid 855632 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 855632 00:12:03.824 23:02:55 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 855632 00:12:04.083 23:02:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.083 23:02:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:04.083 00:12:04.083 real 0m21.627s 00:12:04.083 user 1m10.984s 00:12:04.083 sys 0m5.948s 00:12:04.083 23:02:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:04.083 23:02:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:04.083 ************************************ 00:12:04.083 END TEST nvmf_lvol 00:12:04.083 ************************************ 00:12:04.083 23:02:56 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:04.083 23:02:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:04.083 23:02:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:04.083 23:02:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:04.083 ************************************ 00:12:04.083 START TEST nvmf_lvs_grow 00:12:04.083 ************************************ 00:12:04.083 23:02:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:04.342 * Looking for test storage... 
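In short, the nvmf_lvol pass that just finished provisions an lvol on top of a RAID-0 of two malloc bdevs, exports it over RDMA, runs spdk_nvme_perf against it, then exercises snapshot/resize/clone/inflate before tearing everything down. A condensed sketch of that RPC sequence follows; capturing the returned UUIDs via command substitution is an assumption of this sketch, but every call and argument otherwise repeats the logged ones.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID, e.g. d3fdde29-... in the log
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # lvol UUID, e.g. a25918c4-... (size 20, as logged)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# Background I/O while the lvol operations run, as in the log:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait                                                # let the 10 s perf run finish
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"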
00:12:04.342 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.342 23:02:56 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.343 23:02:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:10.908 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:10.908 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:10.908 Found net devices under 0000:da:00.0: mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.908 23:03:02 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:10.908 Found net devices under 0000:da:00.1: mlx_0_1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:10.908 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:10.908 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:10.908 altname enp218s0f0np0 00:12:10.908 altname ens818f0np0 00:12:10.908 inet 192.168.100.8/24 scope global mlx_0_0 00:12:10.908 valid_lft forever preferred_lft forever 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:10.908 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:10.908 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:10.908 altname enp218s0f1np1 00:12:10.908 altname ens818f1np1 00:12:10.908 inet 192.168.100.9/24 scope global mlx_0_1 00:12:10.908 valid_lft forever preferred_lft forever 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:10.908 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:10.909 192.168.100.9' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:10.909 192.168.100.9' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:10.909 192.168.100.9' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=861713 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 861713 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 861713 ']' 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:10.909 23:03:02 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.909 [2024-06-07 23:03:02.543448] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:10.909 [2024-06-07 23:03:02.543496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.909 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.909 [2024-06-07 23:03:02.603773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.909 [2024-06-07 23:03:02.683930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.909 [2024-06-07 23:03:02.683965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.909 [2024-06-07 23:03:02.683972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.909 [2024-06-07 23:03:02.683977] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.909 [2024-06-07 23:03:02.683983] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
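By this point the common helpers have assembled NVMF_TRANSPORT_OPTS, loaded nvme-rdma, and nvmfappstart has launched the target whose DPDK/EAL banner appears above. A minimal sketch of that bring-up, with the nvmf_tgt flags copied from the logged invocation; the retry loop is a simplified stand-in for waitforlisten, and rpc_get_methods is used only as a cheap liveness probe. The transport itself is created by the @100 call shown in the next trace lines.

  # Target bring-up as logged above; paths shortened, wait loop simplified.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                      # wait for the RPC socket to come up
  done
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192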
00:12:10.909 [2024-06-07 23:03:02.684000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.166 23:03:03 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:11.424 [2024-06-07 23:03:03.557065] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14fb830/0x14ffd20) succeed. 00:12:11.424 [2024-06-07 23:03:03.566471] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14fcd30/0x15413b0) succeed. 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.424 ************************************ 00:12:11.424 START TEST lvs_grow_clean 00:12:11.424 ************************************ 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.424 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:11.682 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:11.682 23:03:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:11.940 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:11.940 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:11.940 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:11.940 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:11.940 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:12.198 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 061f6513-d1ba-4ee9-a841-3b55ce896895 lvol 150 00:12:12.198 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=43cff73a-4fd6-4fc0-af0e-13b813fe12d3 00:12:12.198 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:12.198 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:12.455 [2024-06-07 23:03:04.554531] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:12.456 [2024-06-07 23:03:04.554578] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:12.456 true 00:12:12.456 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:12.456 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:12.456 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:12.456 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:12.714 23:03:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 43cff73a-4fd6-4fc0-af0e-13b813fe12d3 00:12:12.972 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:12.972 [2024-06-07 23:03:05.220753] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:12.972 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=862289 
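The clean-grow case has now built its whole stack: a 200M aio file backing an aio bdev, an lvstore with 49 four-MiB data clusters, a 150M lvol exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 over RDMA, and the backing file already grown to 400M and rescanned. A condensed replay of those steps, with the rpc.py path shortened and the $lvs/$lvol captures added for readability (the UUIDs in the log are this run's):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # 49 data clusters
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB volume
  truncate -s 400M aio_file
  rpc.py bdev_aio_rescan aio_bdev        # the bdev grows; the lvstore still reports 49
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The bdev_lvol_grow_lvstore call issued at @60 further down is what finally turns the extra 200M into clusters, taking total_data_clusters from 49 to 99 while bdevperf (pid 862289 above) keeps writing.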
00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 862289 /var/tmp/bdevperf.sock 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 862289 ']' 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:13.231 23:03:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:13.231 [2024-06-07 23:03:05.441244] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:13.231 [2024-06-07 23:03:05.441290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862289 ] 00:12:13.231 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.231 [2024-06-07 23:03:05.501026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.490 [2024-06-07 23:03:05.580743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.057 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:14.057 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:12:14.057 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:14.315 Nvme0n1 00:12:14.315 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.574 [ 00:12:14.574 { 00:12:14.574 "name": "Nvme0n1", 00:12:14.574 "aliases": [ 00:12:14.574 "43cff73a-4fd6-4fc0-af0e-13b813fe12d3" 00:12:14.574 ], 00:12:14.574 "product_name": "NVMe disk", 00:12:14.574 "block_size": 4096, 00:12:14.574 "num_blocks": 38912, 00:12:14.574 "uuid": "43cff73a-4fd6-4fc0-af0e-13b813fe12d3", 00:12:14.574 "assigned_rate_limits": { 00:12:14.574 "rw_ios_per_sec": 0, 00:12:14.574 "rw_mbytes_per_sec": 0, 00:12:14.574 "r_mbytes_per_sec": 0, 00:12:14.574 "w_mbytes_per_sec": 0 00:12:14.574 }, 00:12:14.574 "claimed": false, 00:12:14.574 "zoned": false, 00:12:14.574 "supported_io_types": { 00:12:14.574 "read": true, 00:12:14.574 "write": true, 00:12:14.574 
"unmap": true, 00:12:14.574 "write_zeroes": true, 00:12:14.574 "flush": true, 00:12:14.574 "reset": true, 00:12:14.574 "compare": true, 00:12:14.574 "compare_and_write": true, 00:12:14.574 "abort": true, 00:12:14.574 "nvme_admin": true, 00:12:14.574 "nvme_io": true 00:12:14.574 }, 00:12:14.574 "memory_domains": [ 00:12:14.574 { 00:12:14.574 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:14.574 "dma_device_type": 0 00:12:14.574 } 00:12:14.574 ], 00:12:14.574 "driver_specific": { 00:12:14.574 "nvme": [ 00:12:14.574 { 00:12:14.574 "trid": { 00:12:14.574 "trtype": "RDMA", 00:12:14.574 "adrfam": "IPv4", 00:12:14.574 "traddr": "192.168.100.8", 00:12:14.574 "trsvcid": "4420", 00:12:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:14.574 }, 00:12:14.574 "ctrlr_data": { 00:12:14.574 "cntlid": 1, 00:12:14.574 "vendor_id": "0x8086", 00:12:14.574 "model_number": "SPDK bdev Controller", 00:12:14.574 "serial_number": "SPDK0", 00:12:14.574 "firmware_revision": "24.09", 00:12:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.574 "oacs": { 00:12:14.574 "security": 0, 00:12:14.574 "format": 0, 00:12:14.574 "firmware": 0, 00:12:14.574 "ns_manage": 0 00:12:14.574 }, 00:12:14.574 "multi_ctrlr": true, 00:12:14.574 "ana_reporting": false 00:12:14.574 }, 00:12:14.574 "vs": { 00:12:14.574 "nvme_version": "1.3" 00:12:14.574 }, 00:12:14.574 "ns_data": { 00:12:14.574 "id": 1, 00:12:14.574 "can_share": true 00:12:14.574 } 00:12:14.574 } 00:12:14.574 ], 00:12:14.574 "mp_policy": "active_passive" 00:12:14.574 } 00:12:14.574 } 00:12:14.574 ] 00:12:14.574 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=862447 00:12:14.574 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:14.574 23:03:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.574 Running I/O for 10 seconds... 
00:12:15.510 Latency(us) 00:12:15.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.510 Nvme0n1 : 1.00 33824.00 132.12 0.00 0.00 0.00 0.00 0.00 00:12:15.510 =================================================================================================================== 00:12:15.510 Total : 33824.00 132.12 0.00 0.00 0.00 0.00 0.00 00:12:15.510 00:12:16.447 23:03:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:16.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.705 Nvme0n1 : 2.00 34306.50 134.01 0.00 0.00 0.00 0.00 0.00 00:12:16.705 =================================================================================================================== 00:12:16.705 Total : 34306.50 134.01 0.00 0.00 0.00 0.00 0.00 00:12:16.705 00:12:16.705 true 00:12:16.705 23:03:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:16.705 23:03:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:16.964 23:03:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:16.964 23:03:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:16.964 23:03:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 862447 00:12:17.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.530 Nvme0n1 : 3.00 34486.00 134.71 0.00 0.00 0.00 0.00 0.00 00:12:17.530 =================================================================================================================== 00:12:17.530 Total : 34486.00 134.71 0.00 0.00 0.00 0.00 0.00 00:12:17.530 00:12:18.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.503 Nvme0n1 : 4.00 34680.50 135.47 0.00 0.00 0.00 0.00 0.00 00:12:18.503 =================================================================================================================== 00:12:18.503 Total : 34680.50 135.47 0.00 0.00 0.00 0.00 0.00 00:12:18.503 00:12:19.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.879 Nvme0n1 : 5.00 34829.60 136.05 0.00 0.00 0.00 0.00 0.00 00:12:19.879 =================================================================================================================== 00:12:19.879 Total : 34829.60 136.05 0.00 0.00 0.00 0.00 0.00 00:12:19.879 00:12:20.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.815 Nvme0n1 : 6.00 34929.33 136.44 0.00 0.00 0.00 0.00 0.00 00:12:20.815 =================================================================================================================== 00:12:20.815 Total : 34929.33 136.44 0.00 0.00 0.00 0.00 0.00 00:12:20.815 00:12:21.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.749 Nvme0n1 : 7.00 35011.57 136.76 0.00 0.00 0.00 0.00 0.00 00:12:21.749 =================================================================================================================== 00:12:21.749 Total : 35011.57 136.76 0.00 0.00 
0.00 0.00 0.00 00:12:21.749 00:12:22.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.687 Nvme0n1 : 8.00 35079.25 137.03 0.00 0.00 0.00 0.00 0.00 00:12:22.687 =================================================================================================================== 00:12:22.687 Total : 35079.25 137.03 0.00 0.00 0.00 0.00 0.00 00:12:22.687 00:12:23.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.688 Nvme0n1 : 9.00 35115.78 137.17 0.00 0.00 0.00 0.00 0.00 00:12:23.688 =================================================================================================================== 00:12:23.688 Total : 35115.78 137.17 0.00 0.00 0.00 0.00 0.00 00:12:23.688 00:12:24.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.625 Nvme0n1 : 10.00 35149.70 137.30 0.00 0.00 0.00 0.00 0.00 00:12:24.626 =================================================================================================================== 00:12:24.626 Total : 35149.70 137.30 0.00 0.00 0.00 0.00 0.00 00:12:24.626 00:12:24.626 00:12:24.626 Latency(us) 00:12:24.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.626 Nvme0n1 : 10.00 35150.08 137.30 0.00 0.00 3638.50 2262.55 14293.09 00:12:24.626 =================================================================================================================== 00:12:24.626 Total : 35150.08 137.30 0.00 0.00 3638.50 2262.55 14293.09 00:12:24.626 0 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 862289 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 862289 ']' 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 862289 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 862289 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 862289' 00:12:24.626 killing process with pid 862289 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 862289 00:12:24.626 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.626 00:12:24.626 Latency(us) 00:12:24.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.626 =================================================================================================================== 00:12:24.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.626 23:03:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 862289 00:12:24.885 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
discovery -t rdma -a 192.168.100.8 -s 4420 00:12:25.144 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.144 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:25.144 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.403 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:25.403 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:25.403 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.662 [2024-06-07 23:03:17.709593] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:25.662 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:25.663 request: 00:12:25.663 { 00:12:25.663 "uuid": "061f6513-d1ba-4ee9-a841-3b55ce896895", 00:12:25.663 "method": "bdev_lvol_get_lvstores", 00:12:25.663 "req_id": 1 00:12:25.663 } 00:12:25.663 Got JSON-RPC error response 00:12:25.663 response: 00:12:25.663 { 00:12:25.663 "code": -19, 00:12:25.663 "message": "No such device" 00:12:25.663 } 00:12:25.663 23:03:17 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:25.663 23:03:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:25.921 aio_bdev 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 43cff73a-4fd6-4fc0-af0e-13b813fe12d3 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=43cff73a-4fd6-4fc0-af0e-13b813fe12d3 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:25.921 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.179 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 43cff73a-4fd6-4fc0-af0e-13b813fe12d3 -t 2000 00:12:26.179 [ 00:12:26.179 { 00:12:26.179 "name": "43cff73a-4fd6-4fc0-af0e-13b813fe12d3", 00:12:26.179 "aliases": [ 00:12:26.179 "lvs/lvol" 00:12:26.180 ], 00:12:26.180 "product_name": "Logical Volume", 00:12:26.180 "block_size": 4096, 00:12:26.180 "num_blocks": 38912, 00:12:26.180 "uuid": "43cff73a-4fd6-4fc0-af0e-13b813fe12d3", 00:12:26.180 "assigned_rate_limits": { 00:12:26.180 "rw_ios_per_sec": 0, 00:12:26.180 "rw_mbytes_per_sec": 0, 00:12:26.180 "r_mbytes_per_sec": 0, 00:12:26.180 "w_mbytes_per_sec": 0 00:12:26.180 }, 00:12:26.180 "claimed": false, 00:12:26.180 "zoned": false, 00:12:26.180 "supported_io_types": { 00:12:26.180 "read": true, 00:12:26.180 "write": true, 00:12:26.180 "unmap": true, 00:12:26.180 "write_zeroes": true, 00:12:26.180 "flush": false, 00:12:26.180 "reset": true, 00:12:26.180 "compare": false, 00:12:26.180 "compare_and_write": false, 00:12:26.180 "abort": false, 00:12:26.180 "nvme_admin": false, 00:12:26.180 "nvme_io": false 00:12:26.180 }, 00:12:26.180 "driver_specific": { 00:12:26.180 "lvol": { 00:12:26.180 "lvol_store_uuid": "061f6513-d1ba-4ee9-a841-3b55ce896895", 00:12:26.180 "base_bdev": "aio_bdev", 00:12:26.180 "thin_provision": false, 00:12:26.180 "num_allocated_clusters": 38, 00:12:26.180 "snapshot": false, 00:12:26.180 "clone": false, 00:12:26.180 "esnap_clone": false 00:12:26.180 } 00:12:26.180 } 00:12:26.180 } 00:12:26.180 ] 00:12:26.180 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:12:26.180 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:26.180 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:26.438 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:26.438 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:26.438 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:26.697 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:26.697 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43cff73a-4fd6-4fc0-af0e-13b813fe12d3 00:12:26.697 23:03:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 061f6513-d1ba-4ee9-a841-3b55ce896895 00:12:26.955 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.214 00:12:27.214 real 0m15.649s 00:12:27.214 user 0m15.649s 00:12:27.214 sys 0m1.068s 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 ************************************ 00:12:27.214 END TEST lvs_grow_clean 00:12:27.214 ************************************ 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 ************************************ 00:12:27.214 START TEST lvs_grow_dirty 00:12:27.214 ************************************ 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.214 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.473 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:27.473 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:27.473 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:27.473 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:27.473 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:27.732 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:27.732 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:27.732 23:03:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 lvol 150 00:12:27.990 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:27.990 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.990 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:27.990 [2024-06-07 23:03:20.227733] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:27.990 [2024-06-07 23:03:20.227781] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:27.990 true 00:12:27.990 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:27.990 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:28.249 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:28.249 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:28.507 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:28.507 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:28.766 [2024-06-07 23:03:20.901925] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:28.766 23:03:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:29.024 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=865378 00:12:29.024 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:29.024 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:29.024 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 865378 /var/tmp/bdevperf.sock 00:12:29.024 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 865378 ']' 00:12:29.025 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.025 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:29.025 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.025 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:29.025 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:29.025 [2024-06-07 23:03:21.127045] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
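As in the clean case, a bdevperf instance (pid 865378 here) supplies the I/O load; the lines that follow attach it to the exported namespace over RDMA and start the 10-second randwrite run through its private RPC socket. A sketch of that wiring, with the flags taken from the logged invocation and the socket path being the harness default:

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests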
00:12:29.025 [2024-06-07 23:03:21.127089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865378 ] 00:12:29.025 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.025 [2024-06-07 23:03:21.186915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.025 [2024-06-07 23:03:21.258929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.960 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:29.960 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:12:29.960 23:03:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:29.960 Nvme0n1 00:12:29.961 23:03:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:30.220 [ 00:12:30.220 { 00:12:30.220 "name": "Nvme0n1", 00:12:30.220 "aliases": [ 00:12:30.220 "b26b7097-e5d0-4550-b6f3-f654213f8e7e" 00:12:30.220 ], 00:12:30.220 "product_name": "NVMe disk", 00:12:30.220 "block_size": 4096, 00:12:30.220 "num_blocks": 38912, 00:12:30.220 "uuid": "b26b7097-e5d0-4550-b6f3-f654213f8e7e", 00:12:30.220 "assigned_rate_limits": { 00:12:30.220 "rw_ios_per_sec": 0, 00:12:30.220 "rw_mbytes_per_sec": 0, 00:12:30.220 "r_mbytes_per_sec": 0, 00:12:30.220 "w_mbytes_per_sec": 0 00:12:30.220 }, 00:12:30.220 "claimed": false, 00:12:30.220 "zoned": false, 00:12:30.220 "supported_io_types": { 00:12:30.220 "read": true, 00:12:30.220 "write": true, 00:12:30.220 "unmap": true, 00:12:30.220 "write_zeroes": true, 00:12:30.220 "flush": true, 00:12:30.220 "reset": true, 00:12:30.220 "compare": true, 00:12:30.220 "compare_and_write": true, 00:12:30.220 "abort": true, 00:12:30.220 "nvme_admin": true, 00:12:30.220 "nvme_io": true 00:12:30.220 }, 00:12:30.220 "memory_domains": [ 00:12:30.220 { 00:12:30.220 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:30.220 "dma_device_type": 0 00:12:30.220 } 00:12:30.220 ], 00:12:30.220 "driver_specific": { 00:12:30.220 "nvme": [ 00:12:30.220 { 00:12:30.220 "trid": { 00:12:30.220 "trtype": "RDMA", 00:12:30.220 "adrfam": "IPv4", 00:12:30.220 "traddr": "192.168.100.8", 00:12:30.220 "trsvcid": "4420", 00:12:30.220 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:30.220 }, 00:12:30.220 "ctrlr_data": { 00:12:30.220 "cntlid": 1, 00:12:30.220 "vendor_id": "0x8086", 00:12:30.220 "model_number": "SPDK bdev Controller", 00:12:30.220 "serial_number": "SPDK0", 00:12:30.220 "firmware_revision": "24.09", 00:12:30.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:30.220 "oacs": { 00:12:30.220 "security": 0, 00:12:30.220 "format": 0, 00:12:30.220 "firmware": 0, 00:12:30.220 "ns_manage": 0 00:12:30.220 }, 00:12:30.220 "multi_ctrlr": true, 00:12:30.220 "ana_reporting": false 00:12:30.220 }, 00:12:30.220 "vs": { 00:12:30.220 "nvme_version": "1.3" 00:12:30.220 }, 00:12:30.220 "ns_data": { 00:12:30.220 "id": 1, 00:12:30.220 "can_share": true 00:12:30.220 } 00:12:30.220 } 00:12:30.220 ], 00:12:30.220 "mp_policy": "active_passive" 00:12:30.220 } 00:12:30.220 } 00:12:30.220 ] 00:12:30.220 23:03:22 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=865618 00:12:30.220 23:03:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:30.220 23:03:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:30.220 Running I/O for 10 seconds... 00:12:31.595 Latency(us) 00:12:31.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.595 Nvme0n1 : 1.00 34467.00 134.64 0.00 0.00 0.00 0.00 0.00 00:12:31.595 =================================================================================================================== 00:12:31.595 Total : 34467.00 134.64 0.00 0.00 0.00 0.00 0.00 00:12:31.595 00:12:32.162 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:32.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.421 Nvme0n1 : 2.00 34816.00 136.00 0.00 0.00 0.00 0.00 0.00 00:12:32.421 =================================================================================================================== 00:12:32.421 Total : 34816.00 136.00 0.00 0.00 0.00 0.00 0.00 00:12:32.421 00:12:32.421 true 00:12:32.421 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:32.421 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:32.679 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:32.679 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:32.679 23:03:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 865618 00:12:33.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.245 Nvme0n1 : 3.00 34891.67 136.30 0.00 0.00 0.00 0.00 0.00 00:12:33.245 =================================================================================================================== 00:12:33.245 Total : 34891.67 136.30 0.00 0.00 0.00 0.00 0.00 00:12:33.245 00:12:34.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.180 Nvme0n1 : 4.00 35009.00 136.75 0.00 0.00 0.00 0.00 0.00 00:12:34.180 =================================================================================================================== 00:12:34.180 Total : 35009.00 136.75 0.00 0.00 0.00 0.00 0.00 00:12:34.180 00:12:35.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.557 Nvme0n1 : 5.00 35098.20 137.10 0.00 0.00 0.00 0.00 0.00 00:12:35.557 =================================================================================================================== 00:12:35.557 Total : 35098.20 137.10 0.00 0.00 0.00 0.00 0.00 00:12:35.557 00:12:36.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.492 Nvme0n1 : 6.00 35157.17 137.33 0.00 0.00 0.00 0.00 0.00 00:12:36.492 
=================================================================================================================== 00:12:36.492 Total : 35157.17 137.33 0.00 0.00 0.00 0.00 0.00 00:12:36.492 00:12:37.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.428 Nvme0n1 : 7.00 35199.71 137.50 0.00 0.00 0.00 0.00 0.00 00:12:37.428 =================================================================================================================== 00:12:37.428 Total : 35199.71 137.50 0.00 0.00 0.00 0.00 0.00 00:12:37.428 00:12:38.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.361 Nvme0n1 : 8.00 35219.75 137.58 0.00 0.00 0.00 0.00 0.00 00:12:38.361 =================================================================================================================== 00:12:38.361 Total : 35219.75 137.58 0.00 0.00 0.00 0.00 0.00 00:12:38.361 00:12:39.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.296 Nvme0n1 : 9.00 35249.56 137.69 0.00 0.00 0.00 0.00 0.00 00:12:39.296 =================================================================================================================== 00:12:39.296 Total : 35249.56 137.69 0.00 0.00 0.00 0.00 0.00 00:12:39.296 00:12:40.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.230 Nvme0n1 : 10.00 35267.40 137.76 0.00 0.00 0.00 0.00 0.00 00:12:40.230 =================================================================================================================== 00:12:40.230 Total : 35267.40 137.76 0.00 0.00 0.00 0.00 0.00 00:12:40.230 00:12:40.230 00:12:40.230 Latency(us) 00:12:40.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.230 Nvme0n1 : 10.00 35267.35 137.76 0.00 0.00 3626.45 2231.34 14230.67 00:12:40.230 =================================================================================================================== 00:12:40.230 Total : 35267.35 137.76 0.00 0.00 3626.45 2231.34 14230.67 00:12:40.230 0 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 865378 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 865378 ']' 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 865378 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:40.230 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 865378 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 865378' 00:12:40.489 killing process with pid 865378 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 865378 00:12:40.489 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.489 00:12:40.489 Latency(us) 00:12:40.489 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:12:40.489 =================================================================================================================== 00:12:40.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 865378 00:12:40.489 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:40.748 23:03:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:41.005 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:41.005 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 861713 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 861713 00:12:41.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 861713 Killed "${NVMF_APP[@]}" "$@" 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=867441 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 867441 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 867441 ']' 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
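This is where the dirty variant earns its name: the already-grown lvstore is abandoned by SIGKILL-ing the target (pid 861713), so it is never cleanly unloaded, and a fresh nvmf_tgt (pid 867441) is started in its place; re-creating the aio bdev then forces blobstore recovery, which the notices in the next trace lines report. A sketch of that sequence, with process handling simplified (the harness tracks pids itself):

  kill -9 "$nvmfpid"                             # lvstore left dirty on disk
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target instance
  # wait for /var/tmp/spdk.sock as before, then re-register the backing file:
  rpc.py bdev_aio_create aio_file aio_bdev 4096  # triggers blobstore recovery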
00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:41.264 23:03:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:41.264 [2024-06-07 23:03:33.376610] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:41.264 [2024-06-07 23:03:33.376655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.264 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.264 [2024-06-07 23:03:33.437153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.264 [2024-06-07 23:03:33.515255] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.264 [2024-06-07 23:03:33.515288] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.264 [2024-06-07 23:03:33.515295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.264 [2024-06-07 23:03:33.515301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.264 [2024-06-07 23:03:33.515305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.264 [2024-06-07 23:03:33.515328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:42.199 [2024-06-07 23:03:34.359940] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:42.199 [2024-06-07 23:03:34.360043] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:42.199 [2024-06-07 23:03:34.360068] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:42.199 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:42.458 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b26b7097-e5d0-4550-b6f3-f654213f8e7e -t 2000 00:12:42.458 [ 00:12:42.458 { 00:12:42.458 "name": "b26b7097-e5d0-4550-b6f3-f654213f8e7e", 00:12:42.458 "aliases": [ 00:12:42.458 "lvs/lvol" 00:12:42.458 ], 00:12:42.458 "product_name": "Logical Volume", 00:12:42.458 "block_size": 4096, 00:12:42.458 "num_blocks": 38912, 00:12:42.458 "uuid": "b26b7097-e5d0-4550-b6f3-f654213f8e7e", 00:12:42.458 "assigned_rate_limits": { 00:12:42.458 "rw_ios_per_sec": 0, 00:12:42.458 "rw_mbytes_per_sec": 0, 00:12:42.458 "r_mbytes_per_sec": 0, 00:12:42.458 "w_mbytes_per_sec": 0 00:12:42.458 }, 00:12:42.458 "claimed": false, 00:12:42.458 "zoned": false, 00:12:42.458 "supported_io_types": { 00:12:42.458 "read": true, 00:12:42.458 "write": true, 00:12:42.458 "unmap": true, 00:12:42.458 "write_zeroes": true, 00:12:42.458 "flush": false, 00:12:42.458 "reset": true, 00:12:42.458 "compare": false, 00:12:42.458 "compare_and_write": false, 00:12:42.458 "abort": false, 00:12:42.458 "nvme_admin": false, 00:12:42.458 "nvme_io": false 00:12:42.458 }, 00:12:42.458 "driver_specific": { 00:12:42.458 "lvol": { 00:12:42.458 "lvol_store_uuid": "bd7dc77f-ce7a-49c6-89a3-0c21875f3054", 00:12:42.458 "base_bdev": "aio_bdev", 00:12:42.458 "thin_provision": false, 00:12:42.458 "num_allocated_clusters": 38, 00:12:42.458 "snapshot": false, 00:12:42.458 "clone": false, 00:12:42.458 "esnap_clone": false 00:12:42.458 } 00:12:42.458 } 00:12:42.458 } 00:12:42.458 ] 00:12:42.458 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:12:42.458 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:42.458 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:42.717 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:42.717 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:42.717 23:03:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:42.975 [2024-06-07 23:03:35.208435] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 
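A minimal sketch of the cluster checks traced above (nvmf_lvs_grow.sh@79/@80), using the rpc.py path and lvstore UUID from this run; the $rpc shorthand is added here for readability, and 61/99 are simply the values this test asserts:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  lvs=bd7dc77f-ce7a-49c6-89a3-0c21875f3054
  free_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  data_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free_clusters == 61 )) && (( data_clusters == 99 ))   # values asserted at sh@79/sh@80

The lookup at sh@85 is wrapped in NOT because, once the aio bdev has been deleted and the lvstore hot-removed, the same RPC is expected to fail, which is exactly the -19 / "No such device" response recorded below.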
00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:42.975 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:42.976 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:42.976 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:43.234 request: 00:12:43.234 { 00:12:43.234 "uuid": "bd7dc77f-ce7a-49c6-89a3-0c21875f3054", 00:12:43.234 "method": "bdev_lvol_get_lvstores", 00:12:43.234 "req_id": 1 00:12:43.234 } 00:12:43.234 Got JSON-RPC error response 00:12:43.234 response: 00:12:43.234 { 00:12:43.234 "code": -19, 00:12:43.234 "message": "No such device" 00:12:43.234 } 00:12:43.234 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:12:43.234 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:43.234 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:43.234 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:43.234 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:43.493 aio_bdev 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:43.493 23:03:35 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:43.493 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b26b7097-e5d0-4550-b6f3-f654213f8e7e -t 2000 00:12:43.812 [ 00:12:43.812 { 00:12:43.812 "name": "b26b7097-e5d0-4550-b6f3-f654213f8e7e", 00:12:43.812 "aliases": [ 00:12:43.812 "lvs/lvol" 00:12:43.812 ], 00:12:43.812 "product_name": "Logical Volume", 00:12:43.812 "block_size": 4096, 00:12:43.812 "num_blocks": 38912, 00:12:43.812 "uuid": "b26b7097-e5d0-4550-b6f3-f654213f8e7e", 00:12:43.812 "assigned_rate_limits": { 00:12:43.812 "rw_ios_per_sec": 0, 00:12:43.812 "rw_mbytes_per_sec": 0, 00:12:43.812 "r_mbytes_per_sec": 0, 00:12:43.812 "w_mbytes_per_sec": 0 00:12:43.812 }, 00:12:43.812 "claimed": false, 00:12:43.812 "zoned": false, 00:12:43.812 "supported_io_types": { 00:12:43.812 "read": true, 00:12:43.812 "write": true, 00:12:43.812 "unmap": true, 00:12:43.812 "write_zeroes": true, 00:12:43.812 "flush": false, 00:12:43.812 "reset": true, 00:12:43.812 "compare": false, 00:12:43.812 "compare_and_write": false, 00:12:43.812 "abort": false, 00:12:43.813 "nvme_admin": false, 00:12:43.813 "nvme_io": false 00:12:43.813 }, 00:12:43.813 "driver_specific": { 00:12:43.813 "lvol": { 00:12:43.813 "lvol_store_uuid": "bd7dc77f-ce7a-49c6-89a3-0c21875f3054", 00:12:43.813 "base_bdev": "aio_bdev", 00:12:43.813 "thin_provision": false, 00:12:43.813 "num_allocated_clusters": 38, 00:12:43.813 "snapshot": false, 00:12:43.813 "clone": false, 00:12:43.813 "esnap_clone": false 00:12:43.813 } 00:12:43.813 } 00:12:43.813 } 00:12:43.813 ] 00:12:43.813 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:12:43.813 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:43.813 23:03:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:44.071 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:44.071 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:44.071 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:44.071 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:44.072 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b26b7097-e5d0-4550-b6f3-f654213f8e7e 00:12:44.330 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
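The teardown traced just above (sh@92-sh@94) is a three-step sequence: drop the logical volume, drop its lvstore, then drop the backing aio bdev (the backing file itself is removed right after this). Sketched with the same $rpc shorthand and the identifiers from this run:

  "$rpc" bdev_lvol_delete b26b7097-e5d0-4550-b6f3-f654213f8e7e          # the lvol bdev (alias lvs/lvol)
  "$rpc" bdev_lvol_delete_lvstore -u bd7dc77f-ce7a-49c6-89a3-0c21875f3054
  "$rpc" bdev_aio_delete aio_bdev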
00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:44.589 00:12:44.589 real 0m17.421s 00:12:44.589 user 0m45.585s 00:12:44.589 sys 0m2.900s 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.589 ************************************ 00:12:44.589 END TEST lvs_grow_dirty 00:12:44.589 ************************************ 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:12:44.589 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:44.589 nvmf_trace.0 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:44.848 rmmod nvme_rdma 00:12:44.848 rmmod nvme_fabrics 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 867441 ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 867441 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 867441 ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 867441 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 867441 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 867441' 00:12:44.848 killing process with pid 867441 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 867441 00:12:44.848 23:03:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 867441 00:12:45.107 23:03:37 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:45.107 00:12:45.107 real 0m40.814s 00:12:45.107 user 1m7.140s 00:12:45.107 sys 0m8.946s 00:12:45.107 23:03:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:45.107 23:03:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:45.107 ************************************ 00:12:45.107 END TEST nvmf_lvs_grow 00:12:45.107 ************************************ 00:12:45.107 23:03:37 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:45.107 23:03:37 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:45.107 23:03:37 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:45.107 23:03:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:45.107 ************************************ 00:12:45.107 START TEST nvmf_bdev_io_wait 00:12:45.107 ************************************ 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:45.107 * Looking for test storage... 
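Between the two tests the harness runs the same hand-off: archive the shared-memory trace file, unload the nvme-rdma/nvme-fabrics modules, kill and reap the target PID, then start the next script through run_test. A compressed sketch of that sequence as it appears in this run ($output_dir stands in for spdk/../output, and the real killprocess helper also inspects the process name via ps, which is omitted here):

  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill -0 867441 && kill 867441      # nvmf target PID for this run
  wait 867441
  run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma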
00:12:45.107 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.107 23:03:37 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.107 23:03:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.673 
23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:51.673 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:51.673 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.673 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:51.674 Found net devices under 0000:da:00.0: mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:51.674 Found net devices under 0000:da:00.1: mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:51.674 23:03:42 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:51.674 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.674 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:51.674 altname enp218s0f0np0 00:12:51.674 altname ens818f0np0 00:12:51.674 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.674 valid_lft forever preferred_lft forever 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:51.674 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.674 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:51.674 altname enp218s0f1np1 00:12:51.674 altname ens818f1np1 00:12:51.674 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.674 valid_lft forever preferred_lft forever 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:51.674 23:03:42 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.674 192.168.100.9' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:51.674 192.168.100.9' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:51.674 192.168.100.9' 00:12:51.674 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:51.675 23:03:42 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=871516 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 871516 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 871516 ']' 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 [2024-06-07 23:03:43.044778] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:51.675 [2024-06-07 23:03:43.044819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.675 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.675 [2024-06-07 23:03:43.105569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.675 [2024-06-07 23:03:43.186785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.675 [2024-06-07 23:03:43.186820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.675 [2024-06-07 23:03:43.186827] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.675 [2024-06-07 23:03:43.186832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.675 [2024-06-07 23:03:43.186837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.675 [2024-06-07 23:03:43.186886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.675 [2024-06-07 23:03:43.187040] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.675 [2024-06-07 23:03:43.187075] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.675 [2024-06-07 23:03:43.187076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.675 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:51.934 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.934 23:03:43 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 [2024-06-07 23:03:43.992654] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d5da10/0x1d61f00) succeed. 00:12:51.934 [2024-06-07 23:03:44.001467] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d5f050/0x1da3590) succeed. 
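The target for bdev_io_wait is started with --wait-for-rpc, so nothing is initialized until the script drives it over RPC: a deliberately tiny bdev I/O pool and cache (presumably what provokes the I/O-wait path this test is named after), then framework start, then the RDMA transport. Written out as plain rpc.py calls (rpc_cmd in these scripts wraps rpc.py against the default socket; treating the two as equivalent here is an assumption):

  "$rpc" bdev_set_options -p 5 -c 1                                   # bdev_io pool of 5, cache of 1
  "$rpc" framework_start_init                                         # finish the startup deferred by --wait-for-rpc
  "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192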
00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 Malloc0 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 [2024-06-07 23:03:44.173314] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=871628 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=871631 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.934 { 00:12:51.934 "params": { 00:12:51.934 "name": "Nvme$subsystem", 00:12:51.934 "trtype": "$TEST_TRANSPORT", 00:12:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.934 "adrfam": "ipv4", 00:12:51.934 "trsvcid": "$NVMF_PORT", 00:12:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.934 "hdgst": ${hdgst:-false}, 00:12:51.934 "ddgst": ${ddgst:-false} 00:12:51.934 }, 00:12:51.934 "method": "bdev_nvme_attach_controller" 00:12:51.934 } 00:12:51.934 EOF 00:12:51.934 
)") 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=871634 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.934 { 00:12:51.934 "params": { 00:12:51.934 "name": "Nvme$subsystem", 00:12:51.934 "trtype": "$TEST_TRANSPORT", 00:12:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.934 "adrfam": "ipv4", 00:12:51.934 "trsvcid": "$NVMF_PORT", 00:12:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.934 "hdgst": ${hdgst:-false}, 00:12:51.934 "ddgst": ${ddgst:-false} 00:12:51.934 }, 00:12:51.934 "method": "bdev_nvme_attach_controller" 00:12:51.934 } 00:12:51.934 EOF 00:12:51.934 )") 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=871638 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.934 { 00:12:51.934 "params": { 00:12:51.934 "name": "Nvme$subsystem", 00:12:51.934 "trtype": "$TEST_TRANSPORT", 00:12:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.934 "adrfam": "ipv4", 00:12:51.934 "trsvcid": "$NVMF_PORT", 00:12:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.934 "hdgst": ${hdgst:-false}, 00:12:51.934 "ddgst": ${ddgst:-false} 00:12:51.934 }, 00:12:51.934 "method": "bdev_nvme_attach_controller" 00:12:51.934 } 00:12:51.934 EOF 00:12:51.934 )") 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.934 23:03:44 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.934 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.934 { 00:12:51.934 "params": { 00:12:51.935 "name": "Nvme$subsystem", 00:12:51.935 "trtype": "$TEST_TRANSPORT", 00:12:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.935 "adrfam": "ipv4", 00:12:51.935 "trsvcid": "$NVMF_PORT", 00:12:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.935 "hdgst": ${hdgst:-false}, 00:12:51.935 "ddgst": ${ddgst:-false} 00:12:51.935 }, 00:12:51.935 "method": "bdev_nvme_attach_controller" 00:12:51.935 } 00:12:51.935 EOF 00:12:51.935 )") 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 871628 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.935 "params": { 00:12:51.935 "name": "Nvme1", 00:12:51.935 "trtype": "rdma", 00:12:51.935 "traddr": "192.168.100.8", 00:12:51.935 "adrfam": "ipv4", 00:12:51.935 "trsvcid": "4420", 00:12:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.935 "hdgst": false, 00:12:51.935 "ddgst": false 00:12:51.935 }, 00:12:51.935 "method": "bdev_nvme_attach_controller" 00:12:51.935 }' 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
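Everything from the Malloc0 creation down to this point is the fan-out for the actual test: one 64 MiB / 512-byte-block malloc namespace exported on nqn.2016-06.io.spdk:cnode1 over RDMA at 192.168.100.8:4420, and four bdevperf instances (core masks 0x10/0x20/0x40/0x80, shm ids 1-4) running write, read, flush and unmap workloads against it. The fragment printed above is the bdev_nvme_attach_controller entry for Nvme1 that gen_nvmf_target_json wraps into a full bdev-subsystem config before bdevperf reads it on /dev/fd/63. A sketch of one launch; using process substitution and backgrounding with & to capture WRITE_PID is an inference from the /dev/fd/63 argument and the recorded PID, not a verbatim copy of the script:

  bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
  "$bdevperf" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json) &
  WRITE_PID=$!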
00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.935 "params": { 00:12:51.935 "name": "Nvme1", 00:12:51.935 "trtype": "rdma", 00:12:51.935 "traddr": "192.168.100.8", 00:12:51.935 "adrfam": "ipv4", 00:12:51.935 "trsvcid": "4420", 00:12:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.935 "hdgst": false, 00:12:51.935 "ddgst": false 00:12:51.935 }, 00:12:51.935 "method": "bdev_nvme_attach_controller" 00:12:51.935 }' 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.935 "params": { 00:12:51.935 "name": "Nvme1", 00:12:51.935 "trtype": "rdma", 00:12:51.935 "traddr": "192.168.100.8", 00:12:51.935 "adrfam": "ipv4", 00:12:51.935 "trsvcid": "4420", 00:12:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.935 "hdgst": false, 00:12:51.935 "ddgst": false 00:12:51.935 }, 00:12:51.935 "method": "bdev_nvme_attach_controller" 00:12:51.935 }' 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:51.935 23:03:44 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.935 "params": { 00:12:51.935 "name": "Nvme1", 00:12:51.935 "trtype": "rdma", 00:12:51.935 "traddr": "192.168.100.8", 00:12:51.935 "adrfam": "ipv4", 00:12:51.935 "trsvcid": "4420", 00:12:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.935 "hdgst": false, 00:12:51.935 "ddgst": false 00:12:51.935 }, 00:12:51.935 "method": "bdev_nvme_attach_controller" 00:12:51.935 }' 00:12:52.193 [2024-06-07 23:03:44.218478] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:52.193 [2024-06-07 23:03:44.218531] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:52.193 [2024-06-07 23:03:44.221317] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:52.193 [2024-06-07 23:03:44.221361] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:52.193 [2024-06-07 23:03:44.224409] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:12:52.193 [2024-06-07 23:03:44.224454] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:52.193 [2024-06-07 23:03:44.226549] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:12:52.193 [2024-06-07 23:03:44.226587] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:52.193 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.193 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.193 [2024-06-07 23:03:44.409592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.193 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.452 [2024-06-07 23:03:44.484123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:12:52.452 [2024-06-07 23:03:44.507880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.452 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.452 [2024-06-07 23:03:44.565114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.452 [2024-06-07 23:03:44.590437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:12:52.452 [2024-06-07 23:03:44.625329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.452 [2024-06-07 23:03:44.638168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:12:52.452 [2024-06-07 23:03:44.699910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:12:52.711 Running I/O for 1 seconds... 00:12:52.711 Running I/O for 1 seconds... 00:12:52.711 Running I/O for 1 seconds... 00:12:52.711 Running I/O for 1 seconds... 00:12:53.649 00:12:53.649 Latency(us) 00:12:53.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.649 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:53.649 Nvme1n1 : 1.01 17963.83 70.17 0.00 0.00 7103.05 4244.24 14105.84 00:12:53.649 =================================================================================================================== 00:12:53.649 Total : 17963.83 70.17 0.00 0.00 7103.05 4244.24 14105.84 00:12:53.649 00:12:53.649 Latency(us) 00:12:53.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.649 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:53.649 Nvme1n1 : 1.00 256146.52 1000.57 0.00 0.00 497.67 199.92 1880.26 00:12:53.649 =================================================================================================================== 00:12:53.649 Total : 256146.52 1000.57 0.00 0.00 497.67 199.92 1880.26 00:12:53.649 00:12:53.649 Latency(us) 00:12:53.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.649 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:53.649 Nvme1n1 : 1.00 17298.17 67.57 0.00 0.00 7378.99 4618.73 16976.94 00:12:53.649 =================================================================================================================== 00:12:53.649 Total : 17298.17 67.57 0.00 0.00 7378.99 4618.73 16976.94 00:12:53.649 00:12:53.649 Latency(us) 00:12:53.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.649 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:53.649 Nvme1n1 : 1.00 15039.13 58.75 0.00 0.00 8490.39 3838.54 20472.20 00:12:53.649 =================================================================================================================== 00:12:53.650 Total : 15039.13 58.75 0.00 0.00 8490.39 3838.54 20472.20 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 871631 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 871634 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 871638 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.909 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:53.909 rmmod nvme_rdma 00:12:54.168 rmmod nvme_fabrics 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 871516 ']' 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 871516 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 871516 ']' 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 871516 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 871516 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 871516' 00:12:54.168 killing process with pid 871516 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 871516 00:12:54.168 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 871516 00:12:54.427 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.427 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:54.427 00:12:54.427 real 0m9.318s 00:12:54.427 user 0m20.428s 00:12:54.427 sys 0m5.636s 00:12:54.427 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 
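The teardown traced above boils down to a short, repeatable sequence: drop the subsystem over RPC, unload the host-side NVMe fabrics modules, then stop the target process whose PID was recorded when it was launched. Restated compactly with the values from this run (rpc_cmd is the suite's wrapper around scripts/rpc.py; nvmftestfini and killprocess are likewise suite helpers, inlined here to roughly what the trace shows them doing):

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # target/bdev_io_wait.sh@42
sync
modprobe -v -r nvme-rdma       # the rmmod nvme_rdma / nvme_fabrics lines above are this call's output
modprobe -v -r nvme-fabrics
kill 871516                    # 871516 is the nvmf_tgt started for this test ($nvmfpid)
wait 871516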
00:12:54.427 23:03:46 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 ************************************ 00:12:54.427 END TEST nvmf_bdev_io_wait 00:12:54.427 ************************************ 00:12:54.427 23:03:46 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:54.427 23:03:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:54.427 23:03:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:54.427 23:03:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:54.427 ************************************ 00:12:54.427 START TEST nvmf_queue_depth 00:12:54.427 ************************************ 00:12:54.427 23:03:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:54.427 * Looking for test storage... 00:12:54.427 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.427 23:03:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.427 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:54.427 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.428 23:03:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.687 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.687 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.687 23:03:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.687 23:03:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.355 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:01.356 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:01.356 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.356 23:03:52 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:01.356 Found net devices under 0000:da:00.0: mlx_0_0 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:01.356 Found net devices under 0000:da:00.1: mlx_0_1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:01.356 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:01.356 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:01.356 altname enp218s0f0np0 00:13:01.356 altname ens818f0np0 00:13:01.356 inet 192.168.100.8/24 scope global mlx_0_0 00:13:01.356 valid_lft forever preferred_lft forever 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:01.356 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:01.357 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:13:01.357 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:01.357 altname enp218s0f1np1 00:13:01.357 altname ens818f1np1 00:13:01.357 inet 192.168.100.9/24 scope global mlx_0_1 00:13:01.357 valid_lft forever preferred_lft forever 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:01.357 192.168.100.9' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:01.357 192.168.100.9' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:01.357 192.168.100.9' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=875621 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 875621 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 875621 ']' 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:01.357 23:03:52 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 [2024-06-07 23:03:52.810737] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
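The address discovery traced through nvmf/common.sh@86-113 and @456-458 above reduces to a few shell one-liners: read the IPv4 address off each mlx_0_* netdev, stack the results into RDMA_IP_LIST, and peel off the first and second entries as the target addresses. A condensed restatement follows (the suite walks get_rdma_if_list to find the interfaces; they are hard-coded here for brevity):

get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9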
00:13:01.357 [2024-06-07 23:03:52.810780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.357 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.357 [2024-06-07 23:03:52.870139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.357 [2024-06-07 23:03:52.947876] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.357 [2024-06-07 23:03:52.947908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.357 [2024-06-07 23:03:52.947915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.357 [2024-06-07 23:03:52.947921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.357 [2024-06-07 23:03:52.947926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.357 [2024-06-07 23:03:52.947942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.357 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:01.357 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:13:01.357 23:03:53 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.357 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:01.357 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 [2024-06-07 23:03:53.673847] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x81fb30/0x824020) succeed. 00:13:01.618 [2024-06-07 23:03:53.683512] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x821030/0x8656b0) succeed. 
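With the addresses known, the queue-depth test brings the target up exactly as traced above: a single-core nvmf_tgt, a wait until its RPC socket answers, then an RDMA transport over the two mlx5 devices. A compact equivalent using rpc.py directly (the polling loop is only an illustrative stand-in for the suite's waitforlisten helper, whose implementation is not part of this log):

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until $spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1      # bail out if the target died during startup
        sleep 0.1
done
$spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192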
00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 Malloc0 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 [2024-06-07 23:03:53.758927] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=875695 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 875695 /var/tmp/bdevperf.sock 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 875695 ']' 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
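The provisioning calls traced above (queue_depth.sh@24-30) map one-to-one onto plain rpc.py invocations, and the bdevperf instance is started idle (-z) on its own RPC socket so the NVMe controller can be attached afterwards. Restated with the arguments verbatim from the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# host side of the same test: bdevperf waits for RPC (-z), queue depth 1024, 4 KiB verify for 10 s
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!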
00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:01.618 23:03:53 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:01.618 [2024-06-07 23:03:53.807808] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:13:01.618 [2024-06-07 23:03:53.807845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875695 ] 00:13:01.618 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.618 [2024-06-07 23:03:53.868576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.877 [2024-06-07 23:03:53.945774] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:02.444 NVMe0n1 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:02.444 23:03:54 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:02.701 Running I/O for 10 seconds... 
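The last two RPCs above are the whole host-side workflow: attach the remote namespace through bdevperf's private socket, then tell bdevperf to run the configured job. Using the same $rpc shorthand as the previous sketch:

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# NVMe0n1 shows up inside bdevperf; perform_tests starts the 10 s run whose results follow
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests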
00:13:12.678 00:13:12.678 Latency(us) 00:13:12.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:12.678 Verification LBA range: start 0x0 length 0x4000 00:13:12.678 NVMe0n1 : 10.04 17534.91 68.50 0.00 0.00 58253.02 22719.15 37199.48 00:13:12.678 =================================================================================================================== 00:13:12.678 Total : 17534.91 68.50 0.00 0.00 58253.02 22719.15 37199.48 00:13:12.678 0 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 875695 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 875695 ']' 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 875695 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 875695 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 875695' 00:13:12.678 killing process with pid 875695 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 875695 00:13:12.678 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.678 00:13:12.678 Latency(us) 00:13:12.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.678 =================================================================================================================== 00:13:12.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.678 23:04:04 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 875695 00:13:12.937 23:04:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:12.937 23:04:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:12.938 rmmod nvme_rdma 00:13:12.938 rmmod nvme_fabrics 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 875621 ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 875621 00:13:12.938 
23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 875621 ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 875621 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 875621 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 875621' 00:13:12.938 killing process with pid 875621 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 875621 00:13:12.938 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 875621 00:13:13.197 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.197 23:04:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:13.197 00:13:13.197 real 0m18.853s 00:13:13.197 user 0m26.008s 00:13:13.197 sys 0m5.139s 00:13:13.197 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:13.197 23:04:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:13.197 ************************************ 00:13:13.197 END TEST nvmf_queue_depth 00:13:13.197 ************************************ 00:13:13.197 23:04:05 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:13.197 23:04:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:13.197 23:04:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:13.197 23:04:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:13.457 ************************************ 00:13:13.457 START TEST nvmf_target_multipath 00:13:13.457 ************************************ 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:13.457 * Looking for test storage... 
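Both teardown blocks above lean on the same killprocess helper: check what the PID currently is before signalling it, log the kill, then reap the process. A simplified rendering of that pattern, reconstructed from the common/autotest_common.sh@949-973 lines in the trace (the real helper carries extra branches that this log only hints at):

killprocess() {
        local pid=$1 process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in the runs above
        [ "$process_name" = sudo ] && return 1             # refuse to signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
}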
00:13:13.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
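Among the variables common.sh sets above for the multipath test are a freshly generated host NQN (nvme gen-hostnqn), the matching host ID, and an nvme connect command that gains -i 15 on mlx5 hardware. The multipath script body is not part of this excerpt, so the following connect invocation is only an assumption assembled from those variables plus the listener pattern used in the earlier tests; the flag spellings are standard nvme-cli options, not lines quoted from this log:

NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # the uuid suffix, as in the trace above
# hypothetical connect against a cnode1 listener on the first target address
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"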
00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.457 23:04:05 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.027 23:04:11 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:20.027 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:20.028 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:20.028 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:20.028 Found net devices under 0000:da:00.0: mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:20.028 Found net devices under 0000:da:00.1: mlx_0_1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:20.028 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.028 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:20.028 altname enp218s0f0np0 00:13:20.028 altname ens818f0np0 00:13:20.028 inet 192.168.100.8/24 scope global mlx_0_0 00:13:20.028 valid_lft forever preferred_lft forever 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:20.028 23:04:11 
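Both interface addresses come out of the same small pipeline over ip(8) output (nvmf/common.sh@112-113 above). A sketch of that helper, not the exact SPDK code:

    get_ip_address() {
        local interface=$1
        # field 4 of 'ip -o -4 addr show' is ADDR/PREFIX; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this host
    get_ip_address mlx_0_1    # 192.168.100.9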
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:20.028 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.028 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:20.028 altname enp218s0f1np1 00:13:20.028 altname ens818f1np1 00:13:20.028 inet 192.168.100.9/24 scope global mlx_0_1 00:13:20.028 valid_lft forever preferred_lft forever 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.028 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:20.029 192.168.100.9' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:20.029 192.168.100.9' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:20.029 192.168.100.9' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:20.029 run this test only with TCP transport for now 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:20.029 rmmod nvme_rdma 00:13:20.029 rmmod nvme_fabrics 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:20.029 
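With NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP split out of RDMA_IP_LIST via head/tail, the multipath test itself is a no-op on this transport: it prints the skip message and tears down. Roughly, as a sketch of the guard traced at target/multipath.sh@45-54, hard-coding the rdma literal the trace shows:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # common.sh@457
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # common.sh@458
    if [ rdma != tcp ]; then                                                  # multipath.sh@51
        echo 'run this test only with TCP transport for now'
        nvmftestfini
        exit 0
    fi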
23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:20.029 00:13:20.029 real 0m6.109s 00:13:20.029 user 0m1.733s 00:13:20.029 sys 0m4.506s 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:20.029 23:04:11 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:20.029 ************************************ 00:13:20.029 END TEST nvmf_target_multipath 00:13:20.029 ************************************ 00:13:20.029 23:04:11 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:20.029 23:04:11 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:20.029 23:04:11 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:20.029 23:04:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:20.029 ************************************ 00:13:20.029 START TEST nvmf_zcopy 00:13:20.029 ************************************ 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:20.029 * Looking for test storage... 
00:13:20.029 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.029 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.030 23:04:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:26.695 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:26.695 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:26.695 Found net devices under 0000:da:00.0: mlx_0_0 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:26.695 Found net devices under 0000:da:00.1: mlx_0_1 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.695 23:04:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.695 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:26.696 23:04:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:26.696 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:26.696 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:26.696 altname enp218s0f0np0 00:13:26.696 altname ens818f0np0 00:13:26.696 inet 192.168.100.8/24 scope global mlx_0_0 00:13:26.696 valid_lft forever preferred_lft forever 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:26.696 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:26.696 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:26.696 altname enp218s0f1np1 00:13:26.696 altname ens818f1np1 00:13:26.696 inet 192.168.100.9/24 scope global mlx_0_1 00:13:26.696 valid_lft forever preferred_lft forever 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:26.696 23:04:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:26.696 192.168.100.9' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:26.696 192.168.100.9' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:26.696 192.168.100.9' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=884662 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 884662 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 884662 ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.696 [2024-06-07 23:04:18.136891] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:13:26.696 [2024-06-07 23:04:18.136934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.696 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.696 [2024-06-07 23:04:18.197500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.696 [2024-06-07 23:04:18.273028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.696 [2024-06-07 23:04:18.273063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.696 [2024-06-07 23:04:18.273070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.696 [2024-06-07 23:04:18.273080] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.696 [2024-06-07 23:04:18.273084] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
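The target for this zcopy run is started by nvmfappstart, which backgrounds nvmf_tgt and then blocks in waitforlisten until the RPC socket answers. A rough reconstruction of that flow with simplified bodies; the binary path, flags, and pid 884662 are taken from the trace, and waitforlisten's internals are not shown:

    nvmfappstart() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF "$@" &
        nvmfpid=$!
        waitforlisten "$nvmfpid"   # waits for /var/tmp/spdk.sock to accept RPCs
    }
    nvmfappstart -m 0x2            # pid 884662 in this run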
00:13:26.696 [2024-06-07 23:04:18.273103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:26.696 Unsupported transport: rdma 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # type=--id 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # id=0 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:13:26.696 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:26.955 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:13:26.955 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:13:26.955 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # for n in $shm_files 00:13:26.955 23:04:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:26.956 nvmf_trace.0 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@822 -- # return 0 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:26.956 rmmod nvme_rdma 00:13:26.956 rmmod nvme_fabrics 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 884662 ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 884662 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 884662 ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 884662 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy 
-- common/autotest_common.sh@954 -- # uname 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 884662 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 884662' 00:13:26.956 killing process with pid 884662 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 884662 00:13:26.956 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 884662 00:13:27.215 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.215 23:04:19 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:27.215 00:13:27.215 real 0m7.593s 00:13:27.215 user 0m3.142s 00:13:27.215 sys 0m5.062s 00:13:27.215 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:27.215 23:04:19 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 ************************************ 00:13:27.215 END TEST nvmf_zcopy 00:13:27.215 ************************************ 00:13:27.215 23:04:19 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:27.215 23:04:19 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:27.215 23:04:19 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:27.215 23:04:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 ************************************ 00:13:27.215 START TEST nvmf_nmic 00:13:27.215 ************************************ 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:27.215 * Looking for test storage... 
00:13:27.215 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.215 
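Before this nmic setup, the zcopy teardown above ended by archiving nvmf_trace.0 and killing the target through the killprocess helper. A simplified sketch of that helper as it appears in the trace; the real version special-cases a sudo wrapper, which is omitted here:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return                   # autotest_common.sh@953: still running?
        name=$(ps --no-headers -o comm= "$pid")    # @955: reactor_1 in this run
        echo "killing process with pid $pid"
        kill "$pid"                                # @968
        wait "$pid"                                # @973
    }
    killprocess 884662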
23:04:19 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.215 23:04:19 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.216 23:04:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:33.781 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:33.782 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:33.782 23:04:25 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:33.782 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:33.782 Found net devices under 0000:da:00.0: mlx_0_0 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:33.782 Found net devices under 0000:da:00.1: mlx_0_1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:33.782 23:04:25 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:33.782 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:13:33.782 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:33.782 altname enp218s0f0np0 00:13:33.782 altname ens818f0np0 00:13:33.782 inet 192.168.100.8/24 scope global mlx_0_0 00:13:33.782 valid_lft forever preferred_lft forever 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:33.782 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:33.782 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:33.782 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:33.782 altname enp218s0f1np1 00:13:33.783 altname ens818f1np1 00:13:33.783 inet 192.168.100.9/24 scope global mlx_0_1 00:13:33.783 valid_lft forever preferred_lft forever 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:33.783 192.168.100.9' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:33.783 192.168.100.9' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:33.783 192.168.100.9' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=888343 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 888343 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 888343 ']' 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:33.783 23:04:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:33.783 [2024-06-07 23:04:25.949914] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:13:33.783 [2024-06-07 23:04:25.949960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.783 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.783 [2024-06-07 23:04:26.012155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.042 [2024-06-07 23:04:26.092652] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.042 [2024-06-07 23:04:26.092689] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.042 [2024-06-07 23:04:26.092696] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.042 [2024-06-07 23:04:26.092701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.042 [2024-06-07 23:04:26.092707] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.042 [2024-06-07 23:04:26.092756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.042 [2024-06-07 23:04:26.092852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.042 [2024-06-07 23:04:26.092942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.043 [2024-06-07 23:04:26.092943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.611 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.611 [2024-06-07 23:04:26.813970] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7009d0/0x704ec0) succeed. 00:13:34.611 [2024-06-07 23:04:26.823106] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x702010/0x746550) succeed. 
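The records above show the nmic run starting the target application and creating the RDMA transport before any subsystems exist. A minimal stand-alone sketch of that same bring-up, assuming an SPDK checkout at ./spdk and the default /var/tmp/spdk.sock RPC socket (illustrative paths, not the jenkins workspace ones):

    # start the NVMe-oF target with the same core mask and trace flags as the run above
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once it is listening on /var/tmp/spdk.sock, create the RDMA transport the tests use
    ./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The test then layers a Malloc bdev, subsystems and RDMA listeners on top of this through the same rpc.py client, as the following records show.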
00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 Malloc0 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 [2024-06-07 23:04:26.989569] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:34.870 test case1: single bdev can't be used in multiple subsystems 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 [2024-06-07 23:04:27.013366] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:34.870 [2024-06-07 
23:04:27.013385] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:34.870 [2024-06-07 23:04:27.013392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.870 request: 00:13:34.870 { 00:13:34.870 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:34.870 "namespace": { 00:13:34.870 "bdev_name": "Malloc0", 00:13:34.870 "no_auto_visible": false 00:13:34.870 }, 00:13:34.870 "method": "nvmf_subsystem_add_ns", 00:13:34.870 "req_id": 1 00:13:34.870 } 00:13:34.870 Got JSON-RPC error response 00:13:34.870 response: 00:13:34.870 { 00:13:34.870 "code": -32602, 00:13:34.870 "message": "Invalid parameters" 00:13:34.870 } 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:34.870 Adding namespace failed - expected result. 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:34.870 test case2: host connect to nvmf target in multiple paths 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:34.870 [2024-06-07 23:04:27.025424] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.870 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:35.806 23:04:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:36.742 23:04:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.742 23:04:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:13:36.742 23:04:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.742 23:04:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:36.742 23:04:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.275 23:04:30 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:13:39.275 23:04:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:39.275 [global] 00:13:39.275 thread=1 00:13:39.275 invalidate=1 00:13:39.275 rw=write 00:13:39.275 time_based=1 00:13:39.275 runtime=1 00:13:39.275 ioengine=libaio 00:13:39.275 direct=1 00:13:39.275 bs=4096 00:13:39.275 iodepth=1 00:13:39.275 norandommap=0 00:13:39.275 numjobs=1 00:13:39.275 00:13:39.275 verify_dump=1 00:13:39.275 verify_backlog=512 00:13:39.275 verify_state_save=0 00:13:39.275 do_verify=1 00:13:39.275 verify=crc32c-intel 00:13:39.275 [job0] 00:13:39.275 filename=/dev/nvme0n1 00:13:39.275 Could not set queue depth (nvme0n1) 00:13:39.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.276 fio-3.35 00:13:39.276 Starting 1 thread 00:13:40.212 00:13:40.212 job0: (groupid=0, jobs=1): err= 0: pid=889339: Fri Jun 7 23:04:32 2024 00:13:40.212 read: IOPS=7356, BW=28.7MiB/s (30.1MB/s)(28.8MiB/1001msec) 00:13:40.212 slat (nsec): min=6412, max=30272, avg=7085.08, stdev=733.35 00:13:40.212 clat (nsec): min=48042, max=94755, avg=57868.93, stdev=3918.94 00:13:40.212 lat (usec): min=55, max=125, avg=64.95, stdev= 4.01 00:13:40.212 clat percentiles (nsec): 00:13:40.212 | 1.00th=[50432], 5.00th=[51968], 10.00th=[52992], 20.00th=[54528], 00:13:40.212 | 30.00th=[55552], 40.00th=[56576], 50.00th=[57600], 60.00th=[58624], 00:13:40.212 | 70.00th=[59648], 80.00th=[61184], 90.00th=[62720], 95.00th=[64256], 00:13:40.212 | 99.00th=[68096], 99.50th=[69120], 99.90th=[74240], 99.95th=[80384], 00:13:40.212 | 99.99th=[94720] 00:13:40.212 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:13:40.212 slat (nsec): min=5657, max=31528, avg=8791.65, stdev=1054.21 00:13:40.212 clat (usec): min=39, max=226, avg=55.53, stdev= 4.48 00:13:40.212 lat (usec): min=49, max=241, avg=64.33, stdev= 4.71 00:13:40.212 clat percentiles (usec): 00:13:40.212 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:13:40.212 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:13:40.212 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 63], 00:13:40.212 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 75], 99.95th=[ 110], 00:13:40.212 | 99.99th=[ 227] 00:13:40.212 bw ( KiB/s): min=31728, max=31728, per=100.00%, avg=31728.00, stdev= 0.00, samples=1 00:13:40.212 iops : min= 7932, max= 7932, avg=7932.00, stdev= 0.00, samples=1 00:13:40.212 lat (usec) : 50=3.16%, 100=96.82%, 250=0.03% 00:13:40.212 cpu : usr=9.80%, sys=14.10%, ctx=15045, majf=0, minf=2 00:13:40.212 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.212 issued rwts: total=7364,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.212 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.212 00:13:40.212 Run status group 0 (all jobs): 00:13:40.212 READ: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=28.8MiB (30.2MB), run=1001-1001msec 00:13:40.212 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:13:40.212 00:13:40.212 Disk stats (read/write): 00:13:40.212 nvme0n1: ios=6706/6887, merge=0/0, ticks=370/304, in_queue=674, 
util=90.58% 00:13:40.212 23:04:32 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:42.116 23:04:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.116 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:13:42.116 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:42.116 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.375 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:42.375 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.375 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:13:42.375 23:04:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:42.376 rmmod nvme_rdma 00:13:42.376 rmmod nvme_fabrics 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 888343 ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 888343 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 888343 ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 888343 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 888343 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 888343' 00:13:42.376 killing process with pid 888343 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 888343 00:13:42.376 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 888343 00:13:42.635 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.635 23:04:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:42.635 00:13:42.635 real 0m15.453s 00:13:42.635 user 0m42.162s 00:13:42.635 sys 0m5.623s 00:13:42.635 23:04:34 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:13:42.635 23:04:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:42.635 ************************************ 00:13:42.635 END TEST nvmf_nmic 00:13:42.635 ************************************ 00:13:42.635 23:04:34 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:42.635 23:04:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:42.635 23:04:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:42.635 23:04:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:42.635 ************************************ 00:13:42.635 START TEST nvmf_fio_target 00:13:42.635 ************************************ 00:13:42.635 23:04:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:42.894 * Looking for test storage... 00:13:42.894 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.894 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.895 23:04:34 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.895 23:04:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:49.465 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:49.465 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:49.465 Found net devices under 0000:da:00.0: mlx_0_0 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:49.465 Found net devices under 0000:da:00.1: mlx_0_1 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.465 23:04:40 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.465 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:49.466 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.466 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:49.466 altname enp218s0f0np0 00:13:49.466 altname ens818f0np0 00:13:49.466 inet 192.168.100.8/24 scope global mlx_0_0 00:13:49.466 valid_lft forever preferred_lft forever 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:49.466 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.466 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:49.466 altname enp218s0f1np1 00:13:49.466 altname ens818f1np1 00:13:49.466 inet 192.168.100.9/24 scope global mlx_0_1 00:13:49.466 valid_lft forever 
preferred_lft forever 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.466 23:04:40 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:49.466 192.168.100.9' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:49.466 192.168.100.9' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:49.466 192.168.100.9' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=893388 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 893388 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 893388 ']' 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:49.466 23:04:40 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.466 [2024-06-07 23:04:40.993660] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:13:49.466 [2024-06-07 23:04:40.993703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.466 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.466 [2024-06-07 23:04:41.054613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.466 [2024-06-07 23:04:41.132518] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:49.466 [2024-06-07 23:04:41.132556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.466 [2024-06-07 23:04:41.132563] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.466 [2024-06-07 23:04:41.132569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.466 [2024-06-07 23:04:41.132573] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.466 [2024-06-07 23:04:41.132620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.466 [2024-06-07 23:04:41.132717] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.466 [2024-06-07 23:04:41.132802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.466 [2024-06-07 23:04:41.132803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.725 23:04:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:49.725 [2024-06-07 23:04:41.996087] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ade9d0/0x1ae2ec0) succeed. 00:13:49.984 [2024-06-07 23:04:42.005227] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ae0010/0x1b24550) succeed. 
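The block above is the nvmf/common.sh plumbing that turns the two Mellanox ports into usable target addresses: for every RDMA-capable netdev reported by rxe_cfg, get_ip_address strips the prefix length off the interface's IPv4 address, the first and second results become NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9), and target/fio.sh@19 then creates the RDMA transport over the RPC socket. A minimal standalone sketch of that address-extraction step follows; the wrapper loop and the hard-coded interface list are illustrative assumptions, not the verbatim nvmf/common.sh code.

#!/usr/bin/env bash
# Sketch of the address discovery seen in the trace (assumptions noted above).
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is "A.B.C.D/prefix",
    # so awk selects that field and cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Interface names copied from this run; on another host the mlx_0_* names differ.
rdma_ips=()
for nic in mlx_0_0 mlx_0_1; do
    rdma_ips+=("$(get_ip_address "$nic")")
done

NVMF_FIRST_TARGET_IP=${rdma_ips[0]}     # 192.168.100.8 in this log
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}    # 192.168.100.9 in this log

# The transport itself is created over RPC, as traced at target/fio.sh@19:
#   scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

In the trace the same split is done with head -n 1 and tail -n +2 over the collected RDMA_IP_LIST rather than array indexing.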
00:13:49.984 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.243 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:50.243 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.501 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:50.501 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.501 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:50.501 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.759 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:50.759 23:04:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:51.017 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.275 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:51.275 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.275 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:51.275 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.533 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:51.533 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:51.791 23:04:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.049 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:52.049 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.049 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:52.049 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.308 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.565 [2024-06-07 23:04:44.614051] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.565 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:13:52.565 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:52.823 23:04:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:13:53.757 23:04:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:13:55.716 23:04:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:55.998 [global] 00:13:55.998 thread=1 00:13:55.998 invalidate=1 00:13:55.998 rw=write 00:13:55.998 time_based=1 00:13:55.998 runtime=1 00:13:55.998 ioengine=libaio 00:13:55.998 direct=1 00:13:55.998 bs=4096 00:13:55.998 iodepth=1 00:13:55.998 norandommap=0 00:13:55.998 numjobs=1 00:13:55.998 00:13:55.998 verify_dump=1 00:13:55.998 verify_backlog=512 00:13:55.998 verify_state_save=0 00:13:55.998 do_verify=1 00:13:55.998 verify=crc32c-intel 00:13:55.998 [job0] 00:13:55.998 filename=/dev/nvme0n1 00:13:55.998 [job1] 00:13:55.998 filename=/dev/nvme0n2 00:13:55.998 [job2] 00:13:55.998 filename=/dev/nvme0n3 00:13:55.998 [job3] 00:13:55.998 filename=/dev/nvme0n4 00:13:55.998 Could not set queue depth (nvme0n1) 00:13:55.998 Could not set queue depth (nvme0n2) 00:13:55.998 Could not set queue depth (nvme0n3) 00:13:55.998 Could not set queue depth (nvme0n4) 00:13:56.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.262 fio-3.35 00:13:56.262 Starting 4 threads 00:13:57.634 00:13:57.634 job0: (groupid=0, jobs=1): err= 0: pid=894781: Fri Jun 7 23:04:49 2024 00:13:57.634 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:13:57.634 slat 
(nsec): min=6408, max=24735, avg=7014.56, stdev=672.47 00:13:57.634 clat (usec): min=59, max=111, avg=78.01, stdev= 5.54 00:13:57.634 lat (usec): min=70, max=118, avg=85.02, stdev= 5.57 00:13:57.634 clat percentiles (usec): 00:13:57.634 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:13:57.634 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:13:57.634 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 88], 00:13:57.634 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 106], 99.95th=[ 111], 00:13:57.634 | 99.99th=[ 112] 00:13:57.634 write: IOPS=5960, BW=23.3MiB/s (24.4MB/s)(23.3MiB/1001msec); 0 zone resets 00:13:57.634 slat (nsec): min=7920, max=69606, avg=8827.18, stdev=1080.43 00:13:57.634 clat (usec): min=61, max=310, avg=74.94, stdev= 6.47 00:13:57.634 lat (usec): min=69, max=319, avg=83.77, stdev= 6.61 00:13:57.634 clat percentiles (usec): 00:13:57.634 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:13:57.634 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:13:57.634 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 86], 00:13:57.634 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 110], 00:13:57.634 | 99.99th=[ 310] 00:13:57.634 bw ( KiB/s): min=24576, max=24576, per=32.30%, avg=24576.00, stdev= 0.00, samples=1 00:13:57.634 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:13:57.634 lat (usec) : 100=99.70%, 250=0.29%, 500=0.01% 00:13:57.634 cpu : usr=6.20%, sys=12.40%, ctx=11599, majf=0, minf=1 00:13:57.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.634 issued rwts: total=5632,5966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.634 job1: (groupid=0, jobs=1): err= 0: pid=894796: Fri Jun 7 23:04:49 2024 00:13:57.634 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:13:57.634 slat (nsec): min=6387, max=24213, avg=7036.92, stdev=699.69 00:13:57.634 clat (usec): min=63, max=117, avg=79.10, stdev= 5.77 00:13:57.634 lat (usec): min=70, max=124, avg=86.14, stdev= 5.83 00:13:57.634 clat percentiles (usec): 00:13:57.634 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:13:57.634 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:13:57.634 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 90], 00:13:57.634 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 112], 99.95th=[ 116], 00:13:57.634 | 99.99th=[ 118] 00:13:57.634 write: IOPS=5899, BW=23.0MiB/s (24.2MB/s)(23.1MiB/1001msec); 0 zone resets 00:13:57.634 slat (nsec): min=8025, max=35280, avg=8855.77, stdev=861.89 00:13:57.634 clat (usec): min=61, max=129, avg=74.74, stdev= 5.76 00:13:57.634 lat (usec): min=70, max=137, avg=83.60, stdev= 5.87 00:13:57.634 clat percentiles (usec): 00:13:57.634 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:13:57.634 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:13:57.634 | 70.00th=[ 77], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 85], 00:13:57.634 | 99.00th=[ 93], 99.50th=[ 98], 99.90th=[ 109], 99.95th=[ 111], 00:13:57.635 | 99.99th=[ 130] 00:13:57.635 bw ( KiB/s): min=24576, max=24576, per=32.30%, avg=24576.00, stdev= 0.00, samples=1 00:13:57.635 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:13:57.635 lat (usec) : 100=99.62%, 250=0.38% 00:13:57.635 cpu : 
usr=7.20%, sys=11.40%, ctx=11537, majf=0, minf=1 00:13:57.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 issued rwts: total=5632,5905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.635 job2: (groupid=0, jobs=1): err= 0: pid=894815: Fri Jun 7 23:04:49 2024 00:13:57.635 read: IOPS=3184, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1001msec) 00:13:57.635 slat (nsec): min=6606, max=24976, avg=7447.46, stdev=873.11 00:13:57.635 clat (usec): min=75, max=215, avg=141.34, stdev=24.18 00:13:57.635 lat (usec): min=83, max=222, avg=148.79, stdev=24.19 00:13:57.635 clat percentiles (usec): 00:13:57.635 | 1.00th=[ 90], 5.00th=[ 97], 10.00th=[ 103], 20.00th=[ 131], 00:13:57.635 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:13:57.635 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 180], 95.00th=[ 188], 00:13:57.635 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 210], 99.95th=[ 215], 00:13:57.635 | 99.99th=[ 217] 00:13:57.635 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:57.635 slat (nsec): min=8209, max=35276, avg=9461.22, stdev=1031.63 00:13:57.635 clat (usec): min=69, max=306, avg=133.06, stdev=22.05 00:13:57.635 lat (usec): min=79, max=314, avg=142.52, stdev=22.07 00:13:57.635 clat percentiles (usec): 00:13:57.635 | 1.00th=[ 82], 5.00th=[ 94], 10.00th=[ 100], 20.00th=[ 123], 00:13:57.635 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:13:57.635 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 167], 95.00th=[ 176], 00:13:57.635 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 200], 99.95th=[ 297], 00:13:57.635 | 99.99th=[ 306] 00:13:57.635 bw ( KiB/s): min=14736, max=14736, per=19.37%, avg=14736.00, stdev= 0.00, samples=1 00:13:57.635 iops : min= 3684, max= 3684, avg=3684.00, stdev= 0.00, samples=1 00:13:57.635 lat (usec) : 100=9.13%, 250=90.84%, 500=0.03% 00:13:57.635 cpu : usr=3.10%, sys=8.70%, ctx=6772, majf=0, minf=1 00:13:57.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 issued rwts: total=3188,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.635 job3: (groupid=0, jobs=1): err= 0: pid=894820: Fri Jun 7 23:04:49 2024 00:13:57.635 read: IOPS=3184, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1001msec) 00:13:57.635 slat (nsec): min=6472, max=24444, avg=7400.74, stdev=703.90 00:13:57.635 clat (usec): min=77, max=214, avg=141.35, stdev=23.98 00:13:57.635 lat (usec): min=84, max=222, avg=148.75, stdev=23.96 00:13:57.635 clat percentiles (usec): 00:13:57.635 | 1.00th=[ 89], 5.00th=[ 98], 10.00th=[ 103], 20.00th=[ 130], 00:13:57.635 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:13:57.635 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 188], 00:13:57.635 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 212], 00:13:57.635 | 99.99th=[ 215] 00:13:57.635 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:57.635 slat (nsec): min=8316, max=71449, avg=9478.56, stdev=1364.15 00:13:57.635 clat (usec): min=71, max=326, avg=133.03, stdev=21.98 
00:13:57.635 lat (usec): min=79, max=336, avg=142.51, stdev=22.00 00:13:57.635 clat percentiles (usec): 00:13:57.635 | 1.00th=[ 83], 5.00th=[ 93], 10.00th=[ 101], 20.00th=[ 124], 00:13:57.635 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:13:57.635 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 167], 95.00th=[ 176], 00:13:57.635 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 310], 00:13:57.635 | 99.99th=[ 326] 00:13:57.635 bw ( KiB/s): min=14744, max=14744, per=19.38%, avg=14744.00, stdev= 0.00, samples=1 00:13:57.635 iops : min= 3686, max= 3686, avg=3686.00, stdev= 0.00, samples=1 00:13:57.635 lat (usec) : 100=8.58%, 250=91.39%, 500=0.03% 00:13:57.635 cpu : usr=4.30%, sys=7.50%, ctx=6773, majf=0, minf=2 00:13:57.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.635 issued rwts: total=3188,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.635 00:13:57.635 Run status group 0 (all jobs): 00:13:57.635 READ: bw=68.8MiB/s (72.2MB/s), 12.4MiB/s-22.0MiB/s (13.0MB/s-23.0MB/s), io=68.9MiB (72.3MB), run=1001-1001msec 00:13:57.635 WRITE: bw=74.3MiB/s (77.9MB/s), 14.0MiB/s-23.3MiB/s (14.7MB/s-24.4MB/s), io=74.4MiB (78.0MB), run=1001-1001msec 00:13:57.635 00:13:57.635 Disk stats (read/write): 00:13:57.635 nvme0n1: ios=4855/5120, merge=0/0, ticks=350/343, in_queue=693, util=86.37% 00:13:57.635 nvme0n2: ios=4748/5120, merge=0/0, ticks=340/327, in_queue=667, util=86.90% 00:13:57.635 nvme0n3: ios=2719/3072, merge=0/0, ticks=360/381, in_queue=741, util=88.98% 00:13:57.635 nvme0n4: ios=2719/3072, merge=0/0, ticks=367/385, in_queue=752, util=89.74% 00:13:57.635 23:04:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:57.635 [global] 00:13:57.635 thread=1 00:13:57.635 invalidate=1 00:13:57.635 rw=randwrite 00:13:57.635 time_based=1 00:13:57.635 runtime=1 00:13:57.635 ioengine=libaio 00:13:57.635 direct=1 00:13:57.635 bs=4096 00:13:57.635 iodepth=1 00:13:57.635 norandommap=0 00:13:57.635 numjobs=1 00:13:57.635 00:13:57.635 verify_dump=1 00:13:57.635 verify_backlog=512 00:13:57.635 verify_state_save=0 00:13:57.635 do_verify=1 00:13:57.635 verify=crc32c-intel 00:13:57.635 [job0] 00:13:57.635 filename=/dev/nvme0n1 00:13:57.635 [job1] 00:13:57.635 filename=/dev/nvme0n2 00:13:57.635 [job2] 00:13:57.635 filename=/dev/nvme0n3 00:13:57.635 [job3] 00:13:57.635 filename=/dev/nvme0n4 00:13:57.635 Could not set queue depth (nvme0n1) 00:13:57.635 Could not set queue depth (nvme0n2) 00:13:57.635 Could not set queue depth (nvme0n3) 00:13:57.635 Could not set queue depth (nvme0n4) 00:13:57.635 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.635 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.635 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.635 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.635 fio-3.35 00:13:57.635 Starting 4 threads 00:13:59.006 00:13:59.006 job0: (groupid=0, jobs=1): err= 0: pid=895234: Fri Jun 7 23:04:51 2024 
00:13:59.006 read: IOPS=3246, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:13:59.006 slat (nsec): min=8905, max=28243, avg=9926.13, stdev=1205.89 00:13:59.006 clat (usec): min=76, max=294, avg=136.67, stdev=18.54 00:13:59.006 lat (usec): min=86, max=305, avg=146.59, stdev=18.51 00:13:59.006 clat percentiles (usec): 00:13:59.006 | 1.00th=[ 89], 5.00th=[ 99], 10.00th=[ 121], 20.00th=[ 128], 00:13:59.006 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:13:59.006 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 176], 00:13:59.006 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 212], 00:13:59.006 | 99.99th=[ 293] 00:13:59.006 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:59.006 slat (nsec): min=10832, max=76357, avg=11761.15, stdev=1645.40 00:13:59.006 clat (usec): min=75, max=356, avg=128.52, stdev=19.25 00:13:59.006 lat (usec): min=87, max=367, avg=140.28, stdev=19.36 00:13:59.006 clat percentiles (usec): 00:13:59.006 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 113], 20.00th=[ 120], 00:13:59.006 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:13:59.006 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 167], 00:13:59.006 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 239], 99.95th=[ 338], 00:13:59.006 | 99.99th=[ 355] 00:13:59.006 bw ( KiB/s): min=15528, max=15528, per=23.05%, avg=15528.00, stdev= 0.00, samples=1 00:13:59.006 iops : min= 3882, max= 3882, avg=3882.00, stdev= 0.00, samples=1 00:13:59.006 lat (usec) : 100=6.57%, 250=93.37%, 500=0.06% 00:13:59.006 cpu : usr=4.90%, sys=10.60%, ctx=6835, majf=0, minf=1 00:13:59.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:59.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.006 issued rwts: total=3250,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:59.006 job1: (groupid=0, jobs=1): err= 0: pid=895242: Fri Jun 7 23:04:51 2024 00:13:59.006 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:59.006 slat (nsec): min=6419, max=16913, avg=7104.69, stdev=568.22 00:13:59.006 clat (usec): min=64, max=195, avg=94.75, stdev=24.88 00:13:59.006 lat (usec): min=71, max=203, avg=101.86, stdev=24.97 00:13:59.006 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:13:59.007 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 87], 00:13:59.007 | 70.00th=[ 114], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 137], 00:13:59.007 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 186], 00:13:59.007 | 99.99th=[ 196] 00:13:59.007 write: IOPS=5080, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1001msec); 0 zone resets 00:13:59.007 slat (nsec): min=7628, max=73354, avg=8742.69, stdev=1338.05 00:13:59.007 clat (usec): min=55, max=598, avg=91.88, stdev=25.53 00:13:59.007 lat (usec): min=68, max=607, avg=100.62, stdev=25.67 00:13:59.007 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:13:59.007 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 84], 00:13:59.007 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 133], 00:13:59.007 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 223], 00:13:59.007 | 99.99th=[ 603] 00:13:59.007 bw ( KiB/s): min=16384, max=16384, per=24.32%, avg=16384.00, stdev= 0.00, samples=1 
00:13:59.007 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:59.007 lat (usec) : 100=65.74%, 250=34.24%, 500=0.01%, 750=0.01% 00:13:59.007 cpu : usr=6.20%, sys=9.50%, ctx=9695, majf=0, minf=1 00:13:59.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 issued rwts: total=4608,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:59.007 job2: (groupid=0, jobs=1): err= 0: pid=895260: Fri Jun 7 23:04:51 2024 00:13:59.007 read: IOPS=4507, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1001msec) 00:13:59.007 slat (nsec): min=6457, max=28524, avg=7233.41, stdev=844.64 00:13:59.007 clat (usec): min=71, max=208, avg=101.03, stdev=20.05 00:13:59.007 lat (usec): min=79, max=215, avg=108.27, stdev=20.11 00:13:59.007 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:13:59.007 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 97], 00:13:59.007 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 135], 00:13:59.007 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 180], 00:13:59.007 | 99.99th=[ 208] 00:13:59.007 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:13:59.007 slat (nsec): min=7912, max=40614, avg=8853.19, stdev=1051.67 00:13:59.007 clat (usec): min=67, max=414, avg=98.24, stdev=20.23 00:13:59.007 lat (usec): min=76, max=425, avg=107.10, stdev=20.46 00:13:59.007 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:13:59.007 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 104], 00:13:59.007 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 130], 00:13:59.007 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 176], 00:13:59.007 | 99.99th=[ 416] 00:13:59.007 bw ( KiB/s): min=16384, max=16384, per=24.32%, avg=16384.00, stdev= 0.00, samples=1 00:13:59.007 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:59.007 lat (usec) : 100=60.63%, 250=39.35%, 500=0.02% 00:13:59.007 cpu : usr=6.90%, sys=8.00%, ctx=9120, majf=0, minf=1 00:13:59.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 issued rwts: total=4512,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:59.007 job3: (groupid=0, jobs=1): err= 0: pid=895265: Fri Jun 7 23:04:51 2024 00:13:59.007 read: IOPS=3244, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:13:59.007 slat (nsec): min=8972, max=24610, avg=10102.25, stdev=1105.10 00:13:59.007 clat (usec): min=88, max=321, avg=136.63, stdev=14.14 00:13:59.007 lat (usec): min=98, max=331, avg=146.73, stdev=14.11 00:13:59.007 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 98], 5.00th=[ 116], 10.00th=[ 124], 20.00th=[ 129], 00:13:59.007 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:13:59.007 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 159], 00:13:59.007 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 200], 00:13:59.007 | 99.99th=[ 322] 00:13:59.007 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 
zone resets 00:13:59.007 slat (nsec): min=10892, max=76288, avg=11948.68, stdev=1840.68 00:13:59.007 clat (usec): min=82, max=352, avg=128.44, stdev=14.79 00:13:59.007 lat (usec): min=94, max=364, avg=140.39, stdev=14.93 00:13:59.007 clat percentiles (usec): 00:13:59.007 | 1.00th=[ 91], 5.00th=[ 104], 10.00th=[ 116], 20.00th=[ 121], 00:13:59.007 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:13:59.007 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153], 00:13:59.007 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 212], 99.95th=[ 330], 00:13:59.007 | 99.99th=[ 351] 00:13:59.007 bw ( KiB/s): min=15512, max=15512, per=23.02%, avg=15512.00, stdev= 0.00, samples=1 00:13:59.007 iops : min= 3878, max= 3878, avg=3878.00, stdev= 0.00, samples=1 00:13:59.007 lat (usec) : 100=3.02%, 250=96.93%, 500=0.06% 00:13:59.007 cpu : usr=5.50%, sys=9.80%, ctx=6833, majf=0, minf=2 00:13:59.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.007 issued rwts: total=3248,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:59.007 00:13:59.007 Run status group 0 (all jobs): 00:13:59.007 READ: bw=60.9MiB/s (63.9MB/s), 12.7MiB/s-18.0MiB/s (13.3MB/s-18.9MB/s), io=61.0MiB (64.0MB), run=1001-1001msec 00:13:59.007 WRITE: bw=65.8MiB/s (69.0MB/s), 14.0MiB/s-19.8MiB/s (14.7MB/s-20.8MB/s), io=65.9MiB (69.1MB), run=1001-1001msec 00:13:59.007 00:13:59.007 Disk stats (read/write): 00:13:59.007 nvme0n1: ios=2857/3072, merge=0/0, ticks=371/367, in_queue=738, util=86.67% 00:13:59.007 nvme0n2: ios=3946/4096, merge=0/0, ticks=361/354, in_queue=715, util=87.22% 00:13:59.007 nvme0n3: ios=3584/4060, merge=0/0, ticks=356/377, in_queue=733, util=89.21% 00:13:59.007 nvme0n4: ios=2807/3072, merge=0/0, ticks=362/365, in_queue=727, util=89.86% 00:13:59.007 23:04:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:59.007 [global] 00:13:59.007 thread=1 00:13:59.007 invalidate=1 00:13:59.007 rw=write 00:13:59.007 time_based=1 00:13:59.007 runtime=1 00:13:59.007 ioengine=libaio 00:13:59.007 direct=1 00:13:59.007 bs=4096 00:13:59.007 iodepth=128 00:13:59.007 norandommap=0 00:13:59.007 numjobs=1 00:13:59.007 00:13:59.007 verify_dump=1 00:13:59.007 verify_backlog=512 00:13:59.007 verify_state_save=0 00:13:59.007 do_verify=1 00:13:59.007 verify=crc32c-intel 00:13:59.007 [job0] 00:13:59.007 filename=/dev/nvme0n1 00:13:59.007 [job1] 00:13:59.007 filename=/dev/nvme0n2 00:13:59.007 [job2] 00:13:59.007 filename=/dev/nvme0n3 00:13:59.007 [job3] 00:13:59.007 filename=/dev/nvme0n4 00:13:59.007 Could not set queue depth (nvme0n1) 00:13:59.007 Could not set queue depth (nvme0n2) 00:13:59.007 Could not set queue depth (nvme0n3) 00:13:59.007 Could not set queue depth (nvme0n4) 00:13:59.265 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.265 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.265 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.265 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
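Between 23:04:42 and 23:04:45 the target side is provisioned entirely through rpc.py: two standalone malloc bdevs, a RAID-0 and a concat RAID assembled from five more, a subsystem carrying those four namespaces, and an RDMA listener on the first target IP; the host then attaches with nvme-cli and fio-wrapper drives the resulting /dev/nvme0n1-n4 with the job files echoed above. The sketch below condenses that sequence; the shell variable names and the loop are illustrative, and the --hostnqn/--hostid flags and error handling from the trace are omitted.

#!/usr/bin/env bash
# Condensed replay of the setup traced at target/fio.sh@21-48. The RDMA transport
# was already created at fio.sh@19 (nvmf_create_transport -t rdma
# --num-shared-buffers 1024 -u 8192), so only bdevs, subsystem and listener remain.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Two plain malloc bdevs; rpc.py prints the created bdev name (Malloc0, Malloc1, ...).
malloc0=$($rpc bdev_malloc_create 64 512)
malloc1=$($rpc bdev_malloc_create 64 512)

# RAID-0 over two further malloc bdevs, concat RAID over three more.
m2=$($rpc bdev_malloc_create 64 512); m3=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"
m4=$($rpc bdev_malloc_create 64 512); m5=$($rpc bdev_malloc_create 64 512)
m6=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b "$m4 $m5 $m6"

# Subsystem with the four namespaces, listening on the first RDMA target IP.
$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
for bdev in "$malloc0" "$malloc1" raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420

# Host side: connect, wait until all four namespaces show up (waitforserial
# SPDKISFASTANDAWESOME 4 in the trace), then hand /dev/nvme0n1..n4 to fio.
nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420

The successive fio-wrapper passes reuse the same job layout and only vary rw= and iodepth= (write and randwrite at queue depths 1 and 128); the final 10-second read pass at fio.sh@58 additionally drops the verify options, sets runtime=10 and norandommap=1, and keeps running while fio.sh@63-66 delete concat0, raid0 and the malloc bdevs, which is what produces the Remote I/O error io_u messages near the end of the log.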
00:13:59.265 fio-3.35 00:13:59.265 Starting 4 threads 00:14:00.658 00:14:00.658 job0: (groupid=0, jobs=1): err= 0: pid=895684: Fri Jun 7 23:04:52 2024 00:14:00.658 read: IOPS=5910, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1004msec) 00:14:00.658 slat (nsec): min=1426, max=2867.3k, avg=84077.05, stdev=352846.15 00:14:00.658 clat (usec): min=2164, max=17532, avg=10846.45, stdev=3454.97 00:14:00.658 lat (usec): min=4749, max=17534, avg=10930.53, stdev=3463.04 00:14:00.658 clat percentiles (usec): 00:14:00.658 | 1.00th=[ 6980], 5.00th=[ 7832], 10.00th=[ 8029], 20.00th=[ 8160], 00:14:00.658 | 30.00th=[ 8291], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:14:00.658 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:14:00.658 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16450], 99.95th=[17433], 00:14:00.658 | 99.99th=[17433] 00:14:00.658 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:14:00.658 slat (nsec): min=1877, max=2760.2k, avg=78671.01, stdev=330405.69 00:14:00.658 clat (usec): min=5440, max=15545, avg=10203.43, stdev=3339.87 00:14:00.658 lat (usec): min=5447, max=15548, avg=10282.10, stdev=3350.79 00:14:00.658 clat percentiles (usec): 00:14:00.658 | 1.00th=[ 6718], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7767], 00:14:00.658 | 30.00th=[ 7898], 40.00th=[ 7963], 50.00th=[ 8029], 60.00th=[ 8225], 00:14:00.658 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:14:00.658 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:14:00.658 | 99.99th=[15533] 00:14:00.658 bw ( KiB/s): min=16384, max=32768, per=22.31%, avg=24576.00, stdev=11585.24, samples=2 00:14:00.658 iops : min= 4096, max= 8192, avg=6144.00, stdev=2896.31, samples=2 00:14:00.658 lat (msec) : 4=0.01%, 10=64.66%, 20=35.33% 00:14:00.658 cpu : usr=3.39%, sys=3.49%, ctx=776, majf=0, minf=1 00:14:00.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:00.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.658 issued rwts: total=5934,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.658 job1: (groupid=0, jobs=1): err= 0: pid=895696: Fri Jun 7 23:04:52 2024 00:14:00.658 read: IOPS=8622, BW=33.7MiB/s (35.3MB/s)(33.8MiB/1003msec) 00:14:00.658 slat (nsec): min=1398, max=1195.2k, avg=57680.03, stdev=213886.99 00:14:00.658 clat (usec): min=2078, max=9207, avg=7488.91, stdev=689.38 00:14:00.658 lat (usec): min=2938, max=9519, avg=7546.59, stdev=682.32 00:14:00.658 clat percentiles (usec): 00:14:00.658 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 6980], 00:14:00.658 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7898], 00:14:00.658 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8356], 95.00th=[ 8455], 00:14:00.658 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9110], 99.95th=[ 9241], 00:14:00.658 | 99.99th=[ 9241] 00:14:00.658 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:14:00.658 slat (nsec): min=1878, max=1610.9k, avg=55269.45, stdev=203283.25 00:14:00.658 clat (usec): min=5261, max=8753, avg=7153.94, stdev=647.21 00:14:00.658 lat (usec): min=5269, max=9219, avg=7209.21, stdev=642.75 00:14:00.658 clat percentiles (usec): 00:14:00.658 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6587], 00:14:00.658 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 
7570], 00:14:00.658 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 7963], 95.00th=[ 8094], 00:14:00.658 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[ 8455], 99.95th=[ 8455], 00:14:00.658 | 99.99th=[ 8717] 00:14:00.658 bw ( KiB/s): min=32768, max=36864, per=31.61%, avg=34816.00, stdev=2896.31, samples=2 00:14:00.658 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:14:00.658 lat (msec) : 4=0.18%, 10=99.82% 00:14:00.658 cpu : usr=3.89%, sys=4.09%, ctx=1136, majf=0, minf=1 00:14:00.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:00.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.658 issued rwts: total=8648,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.658 job2: (groupid=0, jobs=1): err= 0: pid=895709: Fri Jun 7 23:04:52 2024 00:14:00.659 read: IOPS=6941, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1002msec) 00:14:00.659 slat (nsec): min=1407, max=1710.0k, avg=71201.76, stdev=271320.64 00:14:00.659 clat (usec): min=1463, max=11072, avg=9130.80, stdev=962.54 00:14:00.659 lat (usec): min=2432, max=11178, avg=9202.00, stdev=929.94 00:14:00.659 clat percentiles (usec): 00:14:00.659 | 1.00th=[ 6652], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:14:00.659 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9634], 00:14:00.659 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10290], 95.00th=[10421], 00:14:00.659 | 99.00th=[10421], 99.50th=[10552], 99.90th=[10552], 99.95th=[10945], 00:14:00.659 | 99.99th=[11076] 00:14:00.659 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:14:00.659 slat (nsec): min=1983, max=2899.7k, avg=67912.30, stdev=268446.90 00:14:00.659 clat (usec): min=6459, max=12490, avg=8830.74, stdev=773.45 00:14:00.659 lat (usec): min=7145, max=12502, avg=8898.65, stdev=742.38 00:14:00.659 clat percentiles (usec): 00:14:00.659 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8029], 20.00th=[ 8160], 00:14:00.659 | 30.00th=[ 8291], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 9241], 00:14:00.659 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10028], 00:14:00.659 | 99.00th=[10290], 99.50th=[10290], 99.90th=[11076], 99.95th=[12387], 00:14:00.659 | 99.99th=[12518] 00:14:00.659 bw ( KiB/s): min=26688, max=30656, per=26.03%, avg=28672.00, stdev=2805.80, samples=2 00:14:00.659 iops : min= 6672, max= 7664, avg=7168.00, stdev=701.45, samples=2 00:14:00.659 lat (msec) : 2=0.01%, 4=0.25%, 10=85.87%, 20=13.87% 00:14:00.659 cpu : usr=3.00%, sys=4.30%, ctx=938, majf=0, minf=1 00:14:00.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:00.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.659 issued rwts: total=6955,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.659 job3: (groupid=0, jobs=1): err= 0: pid=895711: Fri Jun 7 23:04:52 2024 00:14:00.659 read: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec) 00:14:00.659 slat (nsec): min=1500, max=3559.7k, avg=93713.34, stdev=424523.43 00:14:00.659 clat (usec): min=376, max=16232, avg=11911.89, stdev=2854.83 00:14:00.659 lat (usec): min=3187, max=16237, avg=12005.60, stdev=2857.10 00:14:00.659 clat percentiles (usec): 00:14:00.659 | 1.00th=[ 
7439], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9503], 00:14:00.659 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[11338], 00:14:00.659 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:14:00.659 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:14:00.659 | 99.99th=[16188] 00:14:00.659 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:14:00.659 slat (usec): min=2, max=3482, avg=87.57, stdev=391.00 00:14:00.659 clat (usec): min=8378, max=16798, avg=11455.87, stdev=2765.63 00:14:00.659 lat (usec): min=8385, max=16802, avg=11543.44, stdev=2771.35 00:14:00.659 clat percentiles (usec): 00:14:00.659 | 1.00th=[ 8717], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9110], 00:14:00.659 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10421], 00:14:00.659 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15139], 95.00th=[15401], 00:14:00.659 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16188], 99.95th=[16188], 00:14:00.659 | 99.99th=[16909] 00:14:00.659 bw ( KiB/s): min=17152, max=27808, per=20.41%, avg=22480.00, stdev=7534.93, samples=2 00:14:00.659 iops : min= 4288, max= 6952, avg=5620.00, stdev=1883.73, samples=2 00:14:00.659 lat (usec) : 500=0.01% 00:14:00.659 lat (msec) : 4=0.29%, 10=51.07%, 20=48.63% 00:14:00.659 cpu : usr=2.69%, sys=4.49%, ctx=724, majf=0, minf=1 00:14:00.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:00.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.659 issued rwts: total=5236,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.659 00:14:00.659 Run status group 0 (all jobs): 00:14:00.659 READ: bw=104MiB/s (109MB/s), 20.4MiB/s-33.7MiB/s (21.4MB/s-35.3MB/s), io=105MiB (110MB), run=1002-1004msec 00:14:00.659 WRITE: bw=108MiB/s (113MB/s), 21.9MiB/s-33.9MiB/s (23.0MB/s-35.5MB/s), io=108MiB (113MB), run=1002-1004msec 00:14:00.659 00:14:00.659 Disk stats (read/write): 00:14:00.659 nvme0n1: ios=5310/5632, merge=0/0, ticks=15706/16071, in_queue=31777, util=86.97% 00:14:00.659 nvme0n2: ios=7168/7561, merge=0/0, ticks=26735/26721, in_queue=53456, util=87.44% 00:14:00.659 nvme0n3: ios=5835/6144, merge=0/0, ticks=17624/17566, in_queue=35190, util=89.25% 00:14:00.659 nvme0n4: ios=4608/5062, merge=0/0, ticks=13210/13599, in_queue=26809, util=89.80% 00:14:00.659 23:04:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:00.659 [global] 00:14:00.659 thread=1 00:14:00.659 invalidate=1 00:14:00.659 rw=randwrite 00:14:00.659 time_based=1 00:14:00.659 runtime=1 00:14:00.659 ioengine=libaio 00:14:00.659 direct=1 00:14:00.659 bs=4096 00:14:00.659 iodepth=128 00:14:00.659 norandommap=0 00:14:00.659 numjobs=1 00:14:00.659 00:14:00.659 verify_dump=1 00:14:00.659 verify_backlog=512 00:14:00.659 verify_state_save=0 00:14:00.659 do_verify=1 00:14:00.659 verify=crc32c-intel 00:14:00.659 [job0] 00:14:00.659 filename=/dev/nvme0n1 00:14:00.659 [job1] 00:14:00.659 filename=/dev/nvme0n2 00:14:00.659 [job2] 00:14:00.659 filename=/dev/nvme0n3 00:14:00.659 [job3] 00:14:00.659 filename=/dev/nvme0n4 00:14:00.659 Could not set queue depth (nvme0n1) 00:14:00.659 Could not set queue depth (nvme0n2) 00:14:00.659 Could not set queue depth (nvme0n3) 00:14:00.659 Could not set queue 
depth (nvme0n4) 00:14:00.921 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.921 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.921 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.921 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.921 fio-3.35 00:14:00.921 Starting 4 threads 00:14:02.292 00:14:02.292 job0: (groupid=0, jobs=1): err= 0: pid=896083: Fri Jun 7 23:04:54 2024 00:14:02.292 read: IOPS=7897, BW=30.8MiB/s (32.3MB/s)(30.9MiB/1001msec) 00:14:02.292 slat (nsec): min=1428, max=1168.4k, avg=62731.10, stdev=213278.72 00:14:02.292 clat (usec): min=694, max=17261, avg=8051.78, stdev=3772.86 00:14:02.292 lat (usec): min=1399, max=17267, avg=8114.51, stdev=3798.18 00:14:02.292 clat percentiles (usec): 00:14:02.292 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5669], 00:14:02.292 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 6849], 60.00th=[ 7046], 00:14:02.292 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[15664], 95.00th=[15795], 00:14:02.292 | 99.00th=[16581], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:14:02.292 | 99.99th=[17171] 00:14:02.292 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:14:02.292 slat (nsec): min=1840, max=1843.2k, avg=59285.58, stdev=203203.20 00:14:02.292 clat (usec): min=4416, max=16679, avg=7678.49, stdev=3507.49 00:14:02.292 lat (usec): min=5036, max=16707, avg=7737.77, stdev=3530.38 00:14:02.292 clat percentiles (usec): 00:14:02.292 | 1.00th=[ 4686], 5.00th=[ 5145], 10.00th=[ 5211], 20.00th=[ 5342], 00:14:02.292 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6521], 60.00th=[ 6652], 00:14:02.292 | 70.00th=[ 6783], 80.00th=[ 7177], 90.00th=[14877], 95.00th=[15270], 00:14:02.292 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:14:02.292 | 99.99th=[16712] 00:14:02.292 bw ( KiB/s): min=22792, max=22792, per=22.96%, avg=22792.00, stdev= 0.00, samples=1 00:14:02.292 iops : min= 5698, max= 5698, avg=5698.00, stdev= 0.00, samples=1 00:14:02.292 lat (usec) : 750=0.01% 00:14:02.292 lat (msec) : 2=0.10%, 4=0.30%, 10=80.44%, 20=19.15% 00:14:02.292 cpu : usr=2.50%, sys=4.70%, ctx=1518, majf=0, minf=1 00:14:02.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:02.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:02.292 issued rwts: total=7905,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:02.292 job1: (groupid=0, jobs=1): err= 0: pid=896084: Fri Jun 7 23:04:54 2024 00:14:02.292 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:14:02.292 slat (nsec): min=1461, max=2591.9k, avg=86290.10, stdev=293479.47 00:14:02.292 clat (usec): min=5715, max=20267, avg=11256.63, stdev=5005.99 00:14:02.292 lat (usec): min=6470, max=20287, avg=11342.92, stdev=5039.90 00:14:02.292 clat percentiles (usec): 00:14:02.292 | 1.00th=[ 6128], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 6980], 00:14:02.292 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[14615], 00:14:02.292 | 70.00th=[15401], 80.00th=[16057], 90.00th=[19006], 95.00th=[19268], 00:14:02.292 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 
99.95th=[20317], 00:14:02.292 | 99.99th=[20317] 00:14:02.292 write: IOPS=5993, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1004msec); 0 zone resets 00:14:02.292 slat (nsec): min=1947, max=2544.4k, avg=82462.32, stdev=273109.74 00:14:02.292 clat (usec): min=2714, max=21014, avg=10589.32, stdev=4869.25 00:14:02.292 lat (usec): min=3503, max=21018, avg=10671.78, stdev=4901.54 00:14:02.292 clat percentiles (usec): 00:14:02.292 | 1.00th=[ 5866], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:14:02.292 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7439], 00:14:02.292 | 70.00th=[15139], 80.00th=[15664], 90.00th=[18482], 95.00th=[18744], 00:14:02.292 | 99.00th=[19268], 99.50th=[19268], 99.90th=[21103], 99.95th=[21103], 00:14:02.292 | 99.99th=[21103] 00:14:02.292 bw ( KiB/s): min=14352, max=32768, per=23.73%, avg=23560.00, stdev=13022.08, samples=2 00:14:02.292 iops : min= 3588, max= 8192, avg=5890.00, stdev=3255.52, samples=2 00:14:02.292 lat (msec) : 4=0.02%, 10=58.49%, 20=41.27%, 50=0.23% 00:14:02.292 cpu : usr=2.39%, sys=4.79%, ctx=1612, majf=0, minf=1 00:14:02.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:02.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:02.292 issued rwts: total=5632,6017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:02.292 job2: (groupid=0, jobs=1): err= 0: pid=896085: Fri Jun 7 23:04:54 2024 00:14:02.292 read: IOPS=3827, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1004msec) 00:14:02.292 slat (nsec): min=1603, max=1891.0k, avg=126441.77, stdev=321456.02 00:14:02.292 clat (usec): min=2713, max=21692, avg=16085.86, stdev=1844.02 00:14:02.292 lat (usec): min=4372, max=21694, avg=16212.30, stdev=1837.76 00:14:02.292 clat percentiles (usec): 00:14:02.292 | 1.00th=[ 9896], 5.00th=[14484], 10.00th=[14746], 20.00th=[15270], 00:14:02.292 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:14:02.292 | 70.00th=[15926], 80.00th=[17957], 90.00th=[19006], 95.00th=[19268], 00:14:02.292 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:14:02.292 | 99.99th=[21627] 00:14:02.292 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:14:02.292 slat (usec): min=2, max=2005, avg=122.64, stdev=316.92 00:14:02.292 clat (usec): min=9853, max=19587, avg=15878.68, stdev=1832.55 00:14:02.293 lat (usec): min=10591, max=19591, avg=16001.32, stdev=1837.49 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[12387], 5.00th=[13960], 10.00th=[14222], 20.00th=[14484], 00:14:02.293 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15401], 00:14:02.293 | 70.00th=[15664], 80.00th=[18482], 90.00th=[19006], 95.00th=[19268], 00:14:02.293 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:14:02.293 | 99.99th=[19530] 00:14:02.293 bw ( KiB/s): min=16224, max=16544, per=16.50%, avg=16384.00, stdev=226.27, samples=2 00:14:02.293 iops : min= 4056, max= 4136, avg=4096.00, stdev=56.57, samples=2 00:14:02.293 lat (msec) : 4=0.01%, 10=0.52%, 20=99.13%, 50=0.34% 00:14:02.293 cpu : usr=1.10%, sys=3.89%, ctx=1586, majf=0, minf=1 00:14:02.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:02.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:02.293 
issued rwts: total=3843,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:02.293 job3: (groupid=0, jobs=1): err= 0: pid=896086: Fri Jun 7 23:04:54 2024 00:14:02.293 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:14:02.293 slat (nsec): min=1429, max=5048.2k, avg=78532.34, stdev=308120.51 00:14:02.293 clat (usec): min=6753, max=20380, avg=10261.45, stdev=3742.49 00:14:02.293 lat (usec): min=6761, max=21697, avg=10339.99, stdev=3758.95 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 7439], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8455], 00:14:02.293 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8848], 00:14:02.293 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[19006], 95.00th=[19268], 00:14:02.293 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:14:02.293 | 99.99th=[20317] 00:14:02.293 write: IOPS=6588, BW=25.7MiB/s (27.0MB/s)(25.8MiB/1004msec); 0 zone resets 00:14:02.293 slat (nsec): min=1871, max=2502.5k, avg=75560.95, stdev=291830.87 00:14:02.293 clat (usec): min=2059, max=19357, avg=9671.46, stdev=3581.60 00:14:02.293 lat (usec): min=3434, max=19360, avg=9747.02, stdev=3598.03 00:14:02.293 clat percentiles (usec): 00:14:02.293 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[ 8094], 00:14:02.293 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8455], 00:14:02.293 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[18482], 95.00th=[18744], 00:14:02.293 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:14:02.293 | 99.99th=[19268] 00:14:02.293 bw ( KiB/s): min=21408, max=30488, per=26.14%, avg=25948.00, stdev=6420.53, samples=2 00:14:02.293 iops : min= 5352, max= 7622, avg=6487.00, stdev=1605.13, samples=2 00:14:02.293 lat (msec) : 4=0.16%, 10=84.80%, 20=14.89%, 50=0.15% 00:14:02.293 cpu : usr=2.49%, sys=4.49%, ctx=1316, majf=0, minf=1 00:14:02.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:02.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:02.293 issued rwts: total=6144,6615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:02.293 00:14:02.293 Run status group 0 (all jobs): 00:14:02.293 READ: bw=91.5MiB/s (96.0MB/s), 15.0MiB/s-30.8MiB/s (15.7MB/s-32.3MB/s), io=91.9MiB (96.4MB), run=1001-1004msec 00:14:02.293 WRITE: bw=97.0MiB/s (102MB/s), 15.9MiB/s-32.0MiB/s (16.7MB/s-33.5MB/s), io=97.3MiB (102MB), run=1001-1004msec 00:14:02.293 00:14:02.293 Disk stats (read/write): 00:14:02.293 nvme0n1: ios=6104/6144, merge=0/0, ticks=17649/16875, in_queue=34524, util=84.37% 00:14:02.293 nvme0n2: ios=5120/5391, merge=0/0, ticks=16646/16649, in_queue=33295, util=85.12% 00:14:02.293 nvme0n3: ios=3217/3584, merge=0/0, ticks=16404/17271, in_queue=33675, util=88.27% 00:14:02.293 nvme0n4: ios=5632/5975, merge=0/0, ticks=16514/16684, in_queue=33198, util=89.40% 00:14:02.293 23:04:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:02.293 23:04:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=896287 00:14:02.293 23:04:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:02.293 23:04:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:02.293 [global] 00:14:02.293 thread=1 
00:14:02.293 invalidate=1 00:14:02.293 rw=read 00:14:02.293 time_based=1 00:14:02.293 runtime=10 00:14:02.293 ioengine=libaio 00:14:02.293 direct=1 00:14:02.293 bs=4096 00:14:02.293 iodepth=1 00:14:02.293 norandommap=1 00:14:02.293 numjobs=1 00:14:02.293 00:14:02.293 [job0] 00:14:02.293 filename=/dev/nvme0n1 00:14:02.293 [job1] 00:14:02.293 filename=/dev/nvme0n2 00:14:02.293 [job2] 00:14:02.293 filename=/dev/nvme0n3 00:14:02.293 [job3] 00:14:02.293 filename=/dev/nvme0n4 00:14:02.293 Could not set queue depth (nvme0n1) 00:14:02.293 Could not set queue depth (nvme0n2) 00:14:02.293 Could not set queue depth (nvme0n3) 00:14:02.293 Could not set queue depth (nvme0n4) 00:14:02.293 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.293 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.293 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.293 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:02.293 fio-3.35 00:14:02.293 Starting 4 threads 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:05.570 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=90902528, buflen=4096 00:14:05.570 fio: pid=896460, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:05.570 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=96403456, buflen=4096 00:14:05.570 fio: pid=896459, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:05.570 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=50307072, buflen=4096 00:14:05.570 fio: pid=896457, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.570 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:05.828 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=64274432, buflen=4096 00:14:05.828 fio: pid=896458, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:05.828 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.828 23:04:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:05.828 00:14:05.828 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=896457: Fri Jun 7 23:04:57 2024 00:14:05.828 read: IOPS=9389, BW=36.7MiB/s (38.5MB/s)(112MiB/3053msec) 00:14:05.828 slat (usec): min=5, max=29544, avg= 9.48, stdev=208.14 00:14:05.828 clat (usec): min=48, 
max=10769, avg=94.98, stdev=88.17 00:14:05.828 lat (usec): min=54, max=29648, avg=104.45, stdev=226.23 00:14:05.828 clat percentiles (usec): 00:14:05.828 | 1.00th=[ 58], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:14:05.828 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 88], 00:14:05.828 | 70.00th=[ 96], 80.00th=[ 119], 90.00th=[ 131], 95.00th=[ 139], 00:14:05.828 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 194], 00:14:05.828 | 99.99th=[ 4490] 00:14:05.828 bw ( KiB/s): min=30008, max=44472, per=29.91%, avg=39052.80, stdev=5553.00, samples=5 00:14:05.828 iops : min= 7502, max=11118, avg=9763.20, stdev=1388.25, samples=5 00:14:05.828 lat (usec) : 50=0.04%, 100=71.80%, 250=28.13%, 500=0.01%, 750=0.01% 00:14:05.828 lat (msec) : 10=0.01%, 20=0.01% 00:14:05.828 cpu : usr=2.75%, sys=10.45%, ctx=28671, majf=0, minf=1 00:14:05.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.828 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.828 issued rwts: total=28667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.828 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=896458: Fri Jun 7 23:04:57 2024 00:14:05.828 read: IOPS=9833, BW=38.4MiB/s (40.3MB/s)(125MiB/3262msec) 00:14:05.828 slat (usec): min=5, max=16900, avg= 9.32, stdev=156.29 00:14:05.828 clat (usec): min=40, max=20702, avg=90.58, stdev=142.37 00:14:05.828 lat (usec): min=55, max=20709, avg=99.90, stdev=211.41 00:14:05.828 clat percentiles (usec): 00:14:05.828 | 1.00th=[ 55], 5.00th=[ 60], 10.00th=[ 71], 20.00th=[ 75], 00:14:05.828 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 85], 00:14:05.828 | 70.00th=[ 90], 80.00th=[ 110], 90.00th=[ 127], 95.00th=[ 135], 00:14:05.828 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 188], 00:14:05.828 | 99.99th=[ 4490] 00:14:05.828 bw ( KiB/s): min=30024, max=45872, per=29.93%, avg=39073.33, stdev=6641.31, samples=6 00:14:05.828 iops : min= 7506, max=11468, avg=9768.33, stdev=1660.33, samples=6 00:14:05.828 lat (usec) : 50=0.04%, 100=76.48%, 250=23.44%, 500=0.01%, 750=0.01% 00:14:05.828 lat (msec) : 10=0.01%, 20=0.01%, 50=0.01% 00:14:05.828 cpu : usr=2.79%, sys=11.38%, ctx=32083, majf=0, minf=1 00:14:05.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.828 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.828 issued rwts: total=32077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.828 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=896459: Fri Jun 7 23:04:57 2024 00:14:05.828 read: IOPS=8215, BW=32.1MiB/s (33.6MB/s)(91.9MiB/2865msec) 00:14:05.828 slat (usec): min=6, max=12756, avg= 8.14, stdev=113.66 00:14:05.828 clat (usec): min=62, max=8572, avg=112.18, stdev=67.82 00:14:05.828 lat (usec): min=70, max=12842, avg=120.32, stdev=132.24 00:14:05.828 clat percentiles (usec): 00:14:05.828 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 90], 00:14:05.828 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 99], 60.00th=[ 104], 00:14:05.828 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:14:05.828 | 99.00th=[ 192], 
99.50th=[ 198], 99.90th=[ 208], 99.95th=[ 215], 00:14:05.828 | 99.99th=[ 322] 00:14:05.828 bw ( KiB/s): min=26368, max=39456, per=24.94%, avg=32558.40, stdev=6381.06, samples=5 00:14:05.828 iops : min= 6592, max= 9864, avg=8139.60, stdev=1595.27, samples=5 00:14:05.829 lat (usec) : 100=53.59%, 250=46.38%, 500=0.02% 00:14:05.829 lat (msec) : 10=0.01% 00:14:05.829 cpu : usr=2.37%, sys=9.64%, ctx=23539, majf=0, minf=1 00:14:05.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.829 issued rwts: total=23537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.829 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=896460: Fri Jun 7 23:04:57 2024 00:14:05.829 read: IOPS=8201, BW=32.0MiB/s (33.6MB/s)(86.7MiB/2706msec) 00:14:05.829 slat (nsec): min=6361, max=33390, avg=7189.70, stdev=907.21 00:14:05.829 clat (usec): min=67, max=319, avg=112.28, stdev=29.33 00:14:05.829 lat (usec): min=74, max=325, avg=119.47, stdev=29.38 00:14:05.829 clat percentiles (usec): 00:14:05.829 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:14:05.829 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 111], 00:14:05.829 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 157], 00:14:05.829 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 208], 99.95th=[ 215], 00:14:05.829 | 99.99th=[ 277] 00:14:05.829 bw ( KiB/s): min=26368, max=41304, per=25.04%, avg=32694.40, stdev=7405.67, samples=5 00:14:05.829 iops : min= 6592, max=10326, avg=8173.60, stdev=1851.42, samples=5 00:14:05.829 lat (usec) : 100=53.88%, 250=46.10%, 500=0.01% 00:14:05.829 cpu : usr=2.11%, sys=9.98%, ctx=22194, majf=0, minf=2 00:14:05.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:05.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.829 issued rwts: total=22194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:05.829 00:14:05.829 Run status group 0 (all jobs): 00:14:05.829 READ: bw=127MiB/s (134MB/s), 32.0MiB/s-38.4MiB/s (33.6MB/s-40.3MB/s), io=416MiB (436MB), run=2706-3262msec 00:14:05.829 00:14:05.829 Disk stats (read/write): 00:14:05.829 nvme0n1: ios=27048/0, merge=0/0, ticks=2386/0, in_queue=2386, util=94.32% 00:14:05.829 nvme0n2: ios=30205/0, merge=0/0, ticks=2603/0, in_queue=2603, util=94.40% 00:14:05.829 nvme0n3: ios=23536/0, merge=0/0, ticks=2485/0, in_queue=2485, util=95.82% 00:14:05.829 nvme0n4: ios=21571/0, merge=0/0, ticks=2302/0, in_queue=2302, util=96.49% 00:14:06.086 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.087 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:06.344 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.344 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:06.344 23:04:58 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.344 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:06.602 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.602 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:06.859 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:06.859 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 896287 00:14:06.859 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:06.859 23:04:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:07.791 nvmf hotplug test: fio failed as expected 00:14:07.791 23:04:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:08.049 rmmod nvme_rdma 00:14:08.049 rmmod nvme_fabrics 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 893388 ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 893388 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 893388 ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 893388 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 893388 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 893388' 00:14:08.049 killing process with pid 893388 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 893388 00:14:08.049 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 893388 00:14:08.307 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.307 23:05:00 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:08.307 00:14:08.307 real 0m25.595s 00:14:08.307 user 1m51.990s 00:14:08.307 sys 0m8.833s 00:14:08.307 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:08.307 23:05:00 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.307 ************************************ 00:14:08.307 END TEST nvmf_fio_target 00:14:08.307 ************************************ 00:14:08.307 23:05:00 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:08.307 23:05:00 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:08.307 23:05:00 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:08.307 23:05:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:08.307 ************************************ 00:14:08.307 START TEST nvmf_bdevio 00:14:08.307 ************************************ 00:14:08.307 23:05:00 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:08.307 * Looking for test storage... 
00:14:08.568 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.568 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.569 23:05:00 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
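The ID tables being assembled here cover Intel (0x8086) e810/x722 NICs and Mellanox (0x15b3) ConnectX NICs; the script matches them against /sys/bus/pci rather than shelling out to lspci. Purely as an illustrative equivalent — only the 0x15b3 vendor ID is taken from the trace, the command itself is not part of the test run:

    # Show Mellanox NICs with their numeric IDs; 0x1015 (seen just below) is a ConnectX-4 Lx part.
    lspci -nn -d 15b3:
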
00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:15.120 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.120 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:15.121 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:15.121 Found net devices under 0000:da:00.0: mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:15.121 Found net devices under 0000:da:00.1: mlx_0_1 00:14:15.121 
23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:15.121 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.121 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:15.121 altname enp218s0f0np0 00:14:15.121 altname ens818f0np0 00:14:15.121 inet 192.168.100.8/24 scope global mlx_0_0 00:14:15.121 valid_lft forever preferred_lft forever 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:15.121 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.121 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:15.121 altname enp218s0f1np1 00:14:15.121 altname ens818f1np1 00:14:15.121 inet 192.168.100.9/24 scope global mlx_0_1 00:14:15.121 valid_lft forever preferred_lft forever 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.121 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:15.121 192.168.100.9' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:15.122 192.168.100.9' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:15.122 192.168.100.9' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
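Condensed from the get_ip_address calls above: each target address is simply the first IPv4 address configured on an RDMA-capable netdev. A minimal stand-alone sketch of that derivation — interface names and addresses are the ones reported in this run, and the helper name first_ipv4 is invented for the sketch (the script's own helper is get_ip_address):

    # First IPv4 address on a netdev, e.g. mlx_0_0 -> 192.168.100.8 in this run.
    first_ipv4() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(first_ipv4 mlx_0_0)
    NVMF_SECOND_TARGET_IP=$(first_ipv4 mlx_0_1)
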
00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=900764 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 900764 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 900764 ']' 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:15.122 23:05:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 [2024-06-07 23:05:06.374223] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:14:15.122 [2024-06-07 23:05:06.374269] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.122 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.122 [2024-06-07 23:05:06.434232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.122 [2024-06-07 23:05:06.506772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.122 [2024-06-07 23:05:06.506815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.122 [2024-06-07 23:05:06.506822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.122 [2024-06-07 23:05:06.506829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.122 [2024-06-07 23:05:06.506833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
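What nvmfappstart and waitforlisten amount to, reduced to a sketch: the binary path (relative to the spdk checkout), the core mask 0x78 (cores 3-6, matching the reactor lines that follow) and the /var/tmp/spdk.sock socket are taken from the trace, while polling rpc_get_methods is an assumption about how readiness is detected, not a quote from the script:

    # Launch the NVMe-oF target and block until its RPC socket answers.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
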
00:14:15.122 [2024-06-07 23:05:06.506955] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:14:15.122 [2024-06-07 23:05:06.507455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:14:15.122 [2024-06-07 23:05:06.507546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.122 [2024-06-07 23:05:06.507547] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 [2024-06-07 23:05:07.229354] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11182b0/0x111c7a0) succeed. 00:14:15.122 [2024-06-07 23:05:07.238486] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11198f0/0x115de30) succeed. 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 Malloc0 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.122 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.379 [2024-06-07 23:05:07.400618] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:15.379 { 00:14:15.379 "params": { 00:14:15.379 "name": "Nvme$subsystem", 00:14:15.379 "trtype": "$TEST_TRANSPORT", 00:14:15.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.379 "adrfam": "ipv4", 00:14:15.379 "trsvcid": "$NVMF_PORT", 00:14:15.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.379 "hdgst": ${hdgst:-false}, 00:14:15.379 "ddgst": ${ddgst:-false} 00:14:15.379 }, 00:14:15.379 "method": "bdev_nvme_attach_controller" 00:14:15.379 } 00:14:15.379 EOF 00:14:15.379 )") 00:14:15.379 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:15.380 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:15.380 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:15.380 23:05:07 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:15.380 "params": { 00:14:15.380 "name": "Nvme1", 00:14:15.380 "trtype": "rdma", 00:14:15.380 "traddr": "192.168.100.8", 00:14:15.380 "adrfam": "ipv4", 00:14:15.380 "trsvcid": "4420", 00:14:15.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.380 "hdgst": false, 00:14:15.380 "ddgst": false 00:14:15.380 }, 00:14:15.380 "method": "bdev_nvme_attach_controller" 00:14:15.380 }' 00:14:15.380 [2024-06-07 23:05:07.448487] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
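The bdevio setup above reduces to a short RPC sequence plus the generated initiator JSON; this is a condensed restatement of calls already shown in the trace, with the long workspace prefix shortened to rpc.py:

    # Export a 64 MiB, 512 B-block malloc bdev over NVMe/RDMA on 192.168.100.8:4420.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # bdevio then connects as an initiator using the JSON printed above
    # (bdev_nvme_attach_controller: traddr 192.168.100.8, trsvcid 4420, subnqn cnode1).
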
00:14:15.380 [2024-06-07 23:05:07.448532] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901014 ] 00:14:15.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.380 [2024-06-07 23:05:07.510211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.380 [2024-06-07 23:05:07.586352] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.380 [2024-06-07 23:05:07.586447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.380 [2024-06-07 23:05:07.586449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.637 I/O targets: 00:14:15.637 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:15.637 00:14:15.637 00:14:15.637 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.637 http://cunit.sourceforge.net/ 00:14:15.637 00:14:15.637 00:14:15.637 Suite: bdevio tests on: Nvme1n1 00:14:15.637 Test: blockdev write read block ...passed 00:14:15.637 Test: blockdev write zeroes read block ...passed 00:14:15.637 Test: blockdev write zeroes read no split ...passed 00:14:15.637 Test: blockdev write zeroes read split ...passed 00:14:15.637 Test: blockdev write zeroes read split partial ...passed 00:14:15.637 Test: blockdev reset ...[2024-06-07 23:05:07.796048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:15.637 [2024-06-07 23:05:07.818996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:15.637 [2024-06-07 23:05:07.845261] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:15.637 passed 00:14:15.637 Test: blockdev write read 8 blocks ...passed 00:14:15.637 Test: blockdev write read size > 128k ...passed 00:14:15.637 Test: blockdev write read invalid size ...passed 00:14:15.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.637 Test: blockdev write read max offset ...passed 00:14:15.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.637 Test: blockdev writev readv 8 blocks ...passed 00:14:15.637 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.637 Test: blockdev writev readv block ...passed 00:14:15.637 Test: blockdev writev readv size > 128k ...passed 00:14:15.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.637 Test: blockdev comparev and writev ...[2024-06-07 23:05:07.848196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.848803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.637 [2024-06-07 23:05:07.848809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:15.637 passed 00:14:15.637 Test: blockdev nvme passthru rw ...passed 00:14:15.637 Test: blockdev nvme passthru vendor specific ...[2024-06-07 23:05:07.849073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:15.637 [2024-06-07 23:05:07.849084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.849129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:15.637 [2024-06-07 23:05:07.849136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.849184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:15.637 [2024-06-07 23:05:07.849191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:15.637 [2024-06-07 23:05:07.849230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:15.637 [2024-06-07 23:05:07.849236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:15.637 passed 00:14:15.637 Test: blockdev nvme admin passthru ...passed 00:14:15.637 Test: blockdev copy ...passed 00:14:15.637 00:14:15.637 Run Summary: Type Total Ran Passed Failed Inactive 00:14:15.637 suites 1 1 n/a 0 0 00:14:15.637 tests 23 23 23 0 0 00:14:15.637 asserts 152 152 152 0 n/a 00:14:15.637 00:14:15.637 Elapsed time = 0.173 seconds 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:15.896 rmmod nvme_rdma 00:14:15.896 rmmod nvme_fabrics 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 900764 ']' 00:14:15.896 23:05:08 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 900764 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 900764 ']' 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 900764 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 900764 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 900764' 00:14:15.896 killing process with pid 900764 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 900764 00:14:15.896 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 900764 00:14:16.153 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.153 23:05:08 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:16.153 00:14:16.153 real 0m7.912s 00:14:16.153 user 0m10.319s 00:14:16.153 sys 0m4.837s 00:14:16.154 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:16.154 23:05:08 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:16.154 ************************************ 00:14:16.154 END TEST nvmf_bdevio 00:14:16.154 ************************************ 00:14:16.411 23:05:08 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:16.411 23:05:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:16.411 23:05:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:16.411 23:05:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:16.411 ************************************ 00:14:16.411 START TEST nvmf_auth_target 00:14:16.411 ************************************ 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:16.411 * Looking for test storage... 
00:14:16.411 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.411 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.412 23:05:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:22.976 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:22.976 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:22.977 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:22.977 Found net devices under 0000:da:00.0: mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:22.977 Found net devices under 0000:da:00.1: mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:22.977 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:22.977 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:22.977 altname enp218s0f0np0 00:14:22.977 altname ens818f0np0 00:14:22.977 inet 192.168.100.8/24 scope global mlx_0_0 00:14:22.977 valid_lft forever preferred_lft forever 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:22.977 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:22.977 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:22.977 altname enp218s0f1np1 00:14:22.977 altname ens818f1np1 00:14:22.977 inet 192.168.100.9/24 scope global mlx_0_1 00:14:22.977 valid_lft forever preferred_lft forever 
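[editor's note] The interface-to-address lookup traced above is a small ip/awk/cut pipeline; a standalone version of the same helper, with the values observed on this rig in comments:

  # same pipeline as get_ip_address in nvmf/common.sh, per the trace above
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9 on this rig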
00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:22.977 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:22.978 
23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:22.978 192.168.100.9' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:22.978 192.168.100.9' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:22.978 192.168.100.9' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=904589 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 904589 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 904589 ']' 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
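[editor's note] With the RDMA transport options set, nvmfappstart launches the target with auth-layer logging enabled and waits for its RPC socket. A rough stand-in for nvmfappstart/waitforlisten, assuming the default RPC socket /var/tmp/spdk.sock (the real helper also checks that the PID stays alive while polling):

  # start the target with nvmf_auth debug logging, as in the trace above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # poll the RPC socket until the app answers; rpc_get_methods is a core SPDK RPC
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done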
00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:22.978 23:05:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=904837 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed214dd7988728ef21a78873941fd5a24259d7c417eb5f1a 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aP2 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed214dd7988728ef21a78873941fd5a24259d7c417eb5f1a 0 00:14:23.237 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed214dd7988728ef21a78873941fd5a24259d7c417eb5f1a 0 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed214dd7988728ef21a78873941fd5a24259d7c417eb5f1a 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aP2 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aP2 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.aP2 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d843db4eb771f83fb503723f2274719c8b0a0227db7b47efc9850d5d78263249 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VvF 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d843db4eb771f83fb503723f2274719c8b0a0227db7b47efc9850d5d78263249 3 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d843db4eb771f83fb503723f2274719c8b0a0227db7b47efc9850d5d78263249 3 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d843db4eb771f83fb503723f2274719c8b0a0227db7b47efc9850d5d78263249 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VvF 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VvF 00:14:23.238 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.VvF 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d732f0ed941197bcf30094a7507662fc 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hXw 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d732f0ed941197bcf30094a7507662fc 1 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d732f0ed941197bcf30094a7507662fc 1 00:14:23.497 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d732f0ed941197bcf30094a7507662fc 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hXw 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hXw 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.hXw 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40df020dd66d32115444cb2bc3dac30c4f7a56adf49107ff 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jbz 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40df020dd66d32115444cb2bc3dac30c4f7a56adf49107ff 2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40df020dd66d32115444cb2bc3dac30c4f7a56adf49107ff 2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40df020dd66d32115444cb2bc3dac30c4f7a56adf49107ff 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jbz 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jbz 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.jbz 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=3f2951a0f930c71bfeaaa2c72f978f13396bc29ed9efaf88 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pyO 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f2951a0f930c71bfeaaa2c72f978f13396bc29ed9efaf88 2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f2951a0f930c71bfeaaa2c72f978f13396bc29ed9efaf88 2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f2951a0f930c71bfeaaa2c72f978f13396bc29ed9efaf88 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pyO 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pyO 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.pyO 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4a7c3282a0a791f6adeea6e5ce5e995 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.J0x 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4a7c3282a0a791f6adeea6e5ce5e995 1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4a7c3282a0a791f6adeea6e5ce5e995 1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4a7c3282a0a791f6adeea6e5ce5e995 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.J0x 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.J0x 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.J0x 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22ecccf595fd3ab90b62a83ba155c473e47efbbec4a7f2b94b3abc4f217121f3 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4ut 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22ecccf595fd3ab90b62a83ba155c473e47efbbec4a7f2b94b3abc4f217121f3 3 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22ecccf595fd3ab90b62a83ba155c473e47efbbec4a7f2b94b3abc4f217121f3 3 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22ecccf595fd3ab90b62a83ba155c473e47efbbec4a7f2b94b3abc4f217121f3 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:23.498 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4ut 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4ut 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.4ut 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 904589 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 904589 ']' 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
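[editor's note] The secrets generated above all come from gen_dhchap_key, which draws random bytes with xxd and wraps them into a DHHC-1:<digest-id>: string via an inline Python step before chmod 0600. The sketch below mirrors the commands visible in the trace; the base64-plus-CRC32 encoding inside the Python one-liner is my reading of the DH-HMAC-CHAP secret representation and may differ in detail from the helper in nvmf/common.sh:

  gen_dhchap_key() {
      local digest=$1 len=$2 key file
      # digest ids as used by the script: null=0, sha256=1, sha384=2, sha512=3
      declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of key material
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # assumed encoding: base64(key || crc32(key) little-endian), wrapped as DHHC-1:<id>:...:
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:%02d:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }
  gen_dhchap_key null 48      # e.g. a /tmp/spdk.key-null.* file, used as keys[0] above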
00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 904837 /var/tmp/host.sock 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 904837 ']' 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:23.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:23.757 23:05:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aP2 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aP2 00:14:24.018 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aP2 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.VvF ]] 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VvF 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VvF 00:14:24.290 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VvF 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hXw 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hXw 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hXw 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.jbz ]] 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jbz 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jbz 00:14:24.562 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jbz 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pyO 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pyO 00:14:24.820 23:05:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pyO 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.J0x ]] 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J0x 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J0x 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.J0x 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4ut 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4ut 00:14:25.078 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4ut 00:14:25.335 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:25.336 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:25.336 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.336 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.336 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.336 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.593 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.851 00:14:25.851 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.851 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.851 23:05:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.851 { 00:14:25.851 "cntlid": 1, 00:14:25.851 "qid": 0, 00:14:25.851 "state": "enabled", 00:14:25.851 "listen_address": { 00:14:25.851 "trtype": "RDMA", 00:14:25.851 "adrfam": "IPv4", 00:14:25.851 "traddr": "192.168.100.8", 00:14:25.851 "trsvcid": "4420" 00:14:25.851 }, 00:14:25.851 "peer_address": { 00:14:25.851 "trtype": "RDMA", 00:14:25.851 "adrfam": "IPv4", 00:14:25.851 "traddr": "192.168.100.8", 00:14:25.851 "trsvcid": "53902" 00:14:25.851 }, 00:14:25.851 "auth": { 00:14:25.851 "state": "completed", 00:14:25.851 "digest": "sha256", 00:14:25.851 "dhgroup": "null" 00:14:25.851 } 00:14:25.851 } 00:14:25.851 ]' 00:14:25.851 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.109 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.366 23:05:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:26.931 23:05:19 
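[editor's note] The block above is one complete authentication round for key slot 0: both the target (default /var/tmp/spdk.sock) and the host-side spdk_tgt (-r /var/tmp/host.sock) register the key files as keyring keys, the host is restricted to one digest/DH group, the subsystem allows the host NQN with the DH-HMAC-CHAP key pair, and a controller is attached over RDMA; the script then confirms the handshake via nvmf_subsystem_get_qpairs and repeats it with the kernel initiator by handing the same secrets to nvme connect through --dhchap-secret/--dhchap-ctrl-secret. A condensed replay of the RPC side, using the NQNs, address and key paths from this particular run:

  rpc="scripts/rpc.py"                               # target app, default /var/tmp/spdk.sock
  hostrpc="scripts/rpc.py -s /var/tmp/host.sock"     # host-side bdev_nvme app
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  # register the secret files as keyring keys on both ends
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.aP2
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VvF
  $hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aP2
  $hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VvF

  # restrict the host to one digest/DH group, then attach with the key pair
  $hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # the admin queue pair should report auth state "completed", digest sha256, dhgroup null
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  $hostrpc bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"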
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.931 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.189 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.447 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.447 23:05:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.705 23:05:19 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.705 { 00:14:27.705 "cntlid": 3, 00:14:27.705 "qid": 0, 00:14:27.705 "state": "enabled", 00:14:27.705 "listen_address": { 00:14:27.705 "trtype": "RDMA", 00:14:27.705 "adrfam": "IPv4", 00:14:27.705 "traddr": "192.168.100.8", 00:14:27.705 "trsvcid": "4420" 00:14:27.705 }, 00:14:27.705 "peer_address": { 00:14:27.705 "trtype": "RDMA", 00:14:27.705 "adrfam": "IPv4", 00:14:27.705 "traddr": "192.168.100.8", 00:14:27.705 "trsvcid": "46096" 00:14:27.705 }, 00:14:27.705 "auth": { 00:14:27.705 "state": "completed", 00:14:27.705 "digest": "sha256", 00:14:27.705 "dhgroup": "null" 00:14:27.705 } 00:14:27.705 } 00:14:27.705 ]' 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.705 23:05:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.962 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.527 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
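The passes above repeat one connect/authenticate cycle per key index for the sha256 digest with the "null" dhgroup. Condensed into a standalone sketch, one such cycle looks roughly as follows; the rpc.py path, the host RPC socket /var/tmp/host.sock, the 192.168.100.8:4420 RDMA listener and the subsystem/host NQNs are the ones used in this run, the shell variable names are introduced here only for brevity, the target-side calls are assumed to go to the target's default RPC socket (the log reaches it through the suite's rpc_cmd helper), and the DHHC-1 secrets are placeholders rather than the real key material:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# Limit the host-side NVMe driver to one digest/dhgroup combination for this pass.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Allow the host on the target subsystem with a DH-HMAC-CHAP key pair (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the host side; the DH-HMAC-CHAP exchange happens during this connect.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up and the qpair finished authenticating.
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'  # expect "completed"

# Tear the bdev path down, then repeat the connect through nvme-cli with raw secrets.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 -q $HOSTNQN \
  --hostid 803833e2-2ada-e911-906e-0017a4403562 \
  --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The log continues from here with the same cycle for key2 and key3 before moving on to the ffdhe dhgroups.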
00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.784 23:05:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.042 00:14:29.042 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.042 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.042 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.300 { 00:14:29.300 "cntlid": 5, 00:14:29.300 "qid": 0, 00:14:29.300 "state": "enabled", 00:14:29.300 "listen_address": { 00:14:29.300 "trtype": "RDMA", 00:14:29.300 "adrfam": "IPv4", 00:14:29.300 "traddr": "192.168.100.8", 00:14:29.300 "trsvcid": "4420" 00:14:29.300 }, 00:14:29.300 "peer_address": { 00:14:29.300 "trtype": "RDMA", 00:14:29.300 "adrfam": "IPv4", 00:14:29.300 "traddr": "192.168.100.8", 00:14:29.300 "trsvcid": "44564" 00:14:29.300 }, 00:14:29.300 "auth": { 00:14:29.300 "state": "completed", 00:14:29.300 "digest": "sha256", 00:14:29.300 "dhgroup": "null" 00:14:29.300 } 00:14:29.300 } 00:14:29.300 ]' 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.300 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.558 23:05:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.123 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.124 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.124 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.381 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:14:30.382 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.639 00:14:30.639 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.639 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.639 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.899 { 00:14:30.899 "cntlid": 7, 00:14:30.899 "qid": 0, 00:14:30.899 "state": "enabled", 00:14:30.899 "listen_address": { 00:14:30.899 "trtype": "RDMA", 00:14:30.899 "adrfam": "IPv4", 00:14:30.899 "traddr": "192.168.100.8", 00:14:30.899 "trsvcid": "4420" 00:14:30.899 }, 00:14:30.899 "peer_address": { 00:14:30.899 "trtype": "RDMA", 00:14:30.899 "adrfam": "IPv4", 00:14:30.899 "traddr": "192.168.100.8", 00:14:30.899 "trsvcid": "45956" 00:14:30.899 }, 00:14:30.899 "auth": { 00:14:30.899 "state": "completed", 00:14:30.899 "digest": "sha256", 00:14:30.899 "dhgroup": "null" 00:14:30.899 } 00:14:30.899 } 00:14:30.899 ]' 00:14:30.899 23:05:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.899 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.158 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:14:31.723 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.723 23:05:23 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:31.723 23:05:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.723 23:05:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.723 23:05:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.981 23:05:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.981 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.239 00:14:32.239 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.239 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.239 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.497 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.497 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:32.497 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:32.497 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.498 { 00:14:32.498 "cntlid": 9, 00:14:32.498 "qid": 0, 00:14:32.498 "state": "enabled", 00:14:32.498 "listen_address": { 00:14:32.498 "trtype": "RDMA", 00:14:32.498 "adrfam": "IPv4", 00:14:32.498 "traddr": "192.168.100.8", 00:14:32.498 "trsvcid": "4420" 00:14:32.498 }, 00:14:32.498 "peer_address": { 00:14:32.498 "trtype": "RDMA", 00:14:32.498 "adrfam": "IPv4", 00:14:32.498 "traddr": "192.168.100.8", 00:14:32.498 "trsvcid": "57153" 00:14:32.498 }, 00:14:32.498 "auth": { 00:14:32.498 "state": "completed", 00:14:32.498 "digest": "sha256", 00:14:32.498 "dhgroup": "ffdhe2048" 00:14:32.498 } 00:14:32.498 } 00:14:32.498 ]' 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.498 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.755 23:05:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:14:33.320 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.578 23:05:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.836 00:14:33.836 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.836 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.836 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.094 { 00:14:34.094 "cntlid": 11, 00:14:34.094 "qid": 0, 00:14:34.094 "state": "enabled", 00:14:34.094 "listen_address": { 00:14:34.094 "trtype": "RDMA", 00:14:34.094 "adrfam": "IPv4", 00:14:34.094 "traddr": "192.168.100.8", 00:14:34.094 "trsvcid": "4420" 00:14:34.094 }, 00:14:34.094 "peer_address": { 00:14:34.094 "trtype": "RDMA", 00:14:34.094 "adrfam": "IPv4", 00:14:34.094 "traddr": "192.168.100.8", 00:14:34.094 "trsvcid": "48421" 00:14:34.094 }, 00:14:34.094 "auth": { 00:14:34.094 "state": "completed", 00:14:34.094 
"digest": "sha256", 00:14:34.094 "dhgroup": "ffdhe2048" 00:14:34.094 } 00:14:34.094 } 00:14:34.094 ]' 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.094 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.352 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.352 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.352 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.352 23:05:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:14:34.917 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.175 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.433 
23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.433 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.433 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.691 { 00:14:35.691 "cntlid": 13, 00:14:35.691 "qid": 0, 00:14:35.691 "state": "enabled", 00:14:35.691 "listen_address": { 00:14:35.691 "trtype": "RDMA", 00:14:35.691 "adrfam": "IPv4", 00:14:35.691 "traddr": "192.168.100.8", 00:14:35.691 "trsvcid": "4420" 00:14:35.691 }, 00:14:35.691 "peer_address": { 00:14:35.691 "trtype": "RDMA", 00:14:35.691 "adrfam": "IPv4", 00:14:35.691 "traddr": "192.168.100.8", 00:14:35.691 "trsvcid": "39649" 00:14:35.691 }, 00:14:35.691 "auth": { 00:14:35.691 "state": "completed", 00:14:35.691 "digest": "sha256", 00:14:35.691 "dhgroup": "ffdhe2048" 00:14:35.691 } 00:14:35.691 } 00:14:35.691 ]' 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.691 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:35.949 23:05:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.949 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.949 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.949 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:14:35.949 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:36.882 23:05:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.882 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.883 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.883 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.141 00:14:37.141 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.141 23:05:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.141 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.399 { 00:14:37.399 "cntlid": 15, 00:14:37.399 "qid": 0, 00:14:37.399 "state": "enabled", 00:14:37.399 "listen_address": { 00:14:37.399 "trtype": "RDMA", 00:14:37.399 "adrfam": "IPv4", 00:14:37.399 "traddr": "192.168.100.8", 00:14:37.399 "trsvcid": "4420" 00:14:37.399 }, 00:14:37.399 "peer_address": { 00:14:37.399 "trtype": "RDMA", 00:14:37.399 "adrfam": "IPv4", 00:14:37.399 "traddr": "192.168.100.8", 00:14:37.399 "trsvcid": "53793" 00:14:37.399 }, 00:14:37.399 "auth": { 00:14:37.399 "state": "completed", 00:14:37.399 "digest": "sha256", 00:14:37.399 "dhgroup": "ffdhe2048" 00:14:37.399 } 00:14:37.399 } 00:14:37.399 ]' 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.399 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.657 23:05:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:14:38.222 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.480 23:05:30 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.480 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.738 00:14:38.738 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.738 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.738 23:05:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.996 { 00:14:38.996 "cntlid": 17, 00:14:38.996 "qid": 0, 00:14:38.996 "state": "enabled", 00:14:38.996 
"listen_address": { 00:14:38.996 "trtype": "RDMA", 00:14:38.996 "adrfam": "IPv4", 00:14:38.996 "traddr": "192.168.100.8", 00:14:38.996 "trsvcid": "4420" 00:14:38.996 }, 00:14:38.996 "peer_address": { 00:14:38.996 "trtype": "RDMA", 00:14:38.996 "adrfam": "IPv4", 00:14:38.996 "traddr": "192.168.100.8", 00:14:38.996 "trsvcid": "33083" 00:14:38.996 }, 00:14:38.996 "auth": { 00:14:38.996 "state": "completed", 00:14:38.996 "digest": "sha256", 00:14:38.996 "dhgroup": "ffdhe3072" 00:14:38.996 } 00:14:38.996 } 00:14:38.996 ]' 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:38.996 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.254 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.254 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.254 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.254 23:05:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:14:39.818 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.075 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.331 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.589 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.589 { 00:14:40.589 "cntlid": 19, 00:14:40.589 "qid": 0, 00:14:40.589 "state": "enabled", 00:14:40.589 "listen_address": { 00:14:40.589 "trtype": "RDMA", 00:14:40.589 "adrfam": "IPv4", 00:14:40.589 "traddr": "192.168.100.8", 00:14:40.589 "trsvcid": "4420" 00:14:40.589 }, 00:14:40.589 "peer_address": { 00:14:40.589 "trtype": "RDMA", 00:14:40.589 "adrfam": "IPv4", 00:14:40.589 "traddr": "192.168.100.8", 00:14:40.589 "trsvcid": "44001" 00:14:40.589 }, 00:14:40.589 "auth": { 00:14:40.589 "state": "completed", 00:14:40.589 "digest": "sha256", 00:14:40.589 "dhgroup": "ffdhe3072" 00:14:40.589 } 00:14:40.589 } 00:14:40.589 ]' 00:14:40.589 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.848 23:05:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.107 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.673 23:05:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:41.932 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.190 00:14:42.190 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.190 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.190 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.449 { 00:14:42.449 "cntlid": 21, 00:14:42.449 "qid": 0, 00:14:42.449 "state": "enabled", 00:14:42.449 "listen_address": { 00:14:42.449 "trtype": "RDMA", 00:14:42.449 "adrfam": "IPv4", 00:14:42.449 "traddr": "192.168.100.8", 00:14:42.449 "trsvcid": "4420" 00:14:42.449 }, 00:14:42.449 "peer_address": { 00:14:42.449 "trtype": "RDMA", 00:14:42.449 "adrfam": "IPv4", 00:14:42.449 "traddr": "192.168.100.8", 00:14:42.449 "trsvcid": "40500" 00:14:42.449 }, 00:14:42.449 "auth": { 00:14:42.449 "state": "completed", 00:14:42.449 "digest": "sha256", 00:14:42.449 "dhgroup": "ffdhe3072" 00:14:42.449 } 00:14:42.449 } 00:14:42.449 ]' 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.449 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.707 23:05:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:43.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.322 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.591 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.849 00:14:43.849 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.849 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.849 23:05:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.106 { 00:14:44.106 "cntlid": 23, 00:14:44.106 "qid": 0, 00:14:44.106 "state": "enabled", 00:14:44.106 "listen_address": { 00:14:44.106 "trtype": "RDMA", 00:14:44.106 "adrfam": "IPv4", 00:14:44.106 "traddr": "192.168.100.8", 00:14:44.106 "trsvcid": "4420" 00:14:44.106 }, 00:14:44.106 "peer_address": { 00:14:44.106 "trtype": "RDMA", 00:14:44.106 "adrfam": "IPv4", 00:14:44.106 "traddr": "192.168.100.8", 00:14:44.106 "trsvcid": "39504" 00:14:44.106 }, 00:14:44.106 "auth": { 00:14:44.106 "state": "completed", 00:14:44.106 "digest": "sha256", 00:14:44.106 "dhgroup": "ffdhe3072" 00:14:44.106 } 00:14:44.106 } 00:14:44.106 ]' 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.106 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.363 23:05:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.927 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.185 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.443 00:14:45.443 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.443 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.443 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.701 { 00:14:45.701 "cntlid": 25, 00:14:45.701 "qid": 0, 00:14:45.701 "state": "enabled", 00:14:45.701 "listen_address": { 00:14:45.701 "trtype": "RDMA", 00:14:45.701 "adrfam": "IPv4", 00:14:45.701 "traddr": "192.168.100.8", 00:14:45.701 "trsvcid": "4420" 00:14:45.701 }, 00:14:45.701 "peer_address": { 00:14:45.701 "trtype": "RDMA", 00:14:45.701 "adrfam": "IPv4", 00:14:45.701 "traddr": "192.168.100.8", 00:14:45.701 "trsvcid": "46216" 00:14:45.701 }, 00:14:45.701 "auth": { 00:14:45.701 "state": "completed", 00:14:45.701 "digest": "sha256", 00:14:45.701 "dhgroup": "ffdhe4096" 00:14:45.701 } 00:14:45.701 } 00:14:45.701 ]' 00:14:45.701 23:05:37 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.701 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.702 23:05:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.959 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:14:46.525 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.783 23:05:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.783 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.042 00:14:47.042 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.042 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.042 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.302 { 00:14:47.302 "cntlid": 27, 00:14:47.302 "qid": 0, 00:14:47.302 "state": "enabled", 00:14:47.302 "listen_address": { 00:14:47.302 "trtype": "RDMA", 00:14:47.302 "adrfam": "IPv4", 00:14:47.302 "traddr": "192.168.100.8", 00:14:47.302 "trsvcid": "4420" 00:14:47.302 }, 00:14:47.302 "peer_address": { 00:14:47.302 "trtype": "RDMA", 00:14:47.302 "adrfam": "IPv4", 00:14:47.302 "traddr": "192.168.100.8", 00:14:47.302 "trsvcid": "55634" 00:14:47.302 }, 00:14:47.302 "auth": { 00:14:47.302 "state": "completed", 00:14:47.302 "digest": "sha256", 00:14:47.302 "dhgroup": "ffdhe4096" 00:14:47.302 } 00:14:47.302 } 00:14:47.302 ]' 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:47.302 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.561 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.561 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.561 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.561 23:05:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:14:48.128 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.387 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.646 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.905 00:14:48.905 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.905 23:05:40 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.905 23:05:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.905 { 00:14:48.905 "cntlid": 29, 00:14:48.905 "qid": 0, 00:14:48.905 "state": "enabled", 00:14:48.905 "listen_address": { 00:14:48.905 "trtype": "RDMA", 00:14:48.905 "adrfam": "IPv4", 00:14:48.905 "traddr": "192.168.100.8", 00:14:48.905 "trsvcid": "4420" 00:14:48.905 }, 00:14:48.905 "peer_address": { 00:14:48.905 "trtype": "RDMA", 00:14:48.905 "adrfam": "IPv4", 00:14:48.905 "traddr": "192.168.100.8", 00:14:48.905 "trsvcid": "35239" 00:14:48.905 }, 00:14:48.905 "auth": { 00:14:48.905 "state": "completed", 00:14:48.905 "digest": "sha256", 00:14:48.905 "dhgroup": "ffdhe4096" 00:14:48.905 } 00:14:48.905 } 00:14:48.905 ]' 00:14:48.905 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.164 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.423 23:05:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.991 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.250 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.509 00:14:50.509 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.509 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.509 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.768 { 00:14:50.768 "cntlid": 31, 00:14:50.768 "qid": 0, 00:14:50.768 "state": "enabled", 00:14:50.768 "listen_address": { 00:14:50.768 "trtype": "RDMA", 00:14:50.768 "adrfam": "IPv4", 00:14:50.768 "traddr": 
"192.168.100.8", 00:14:50.768 "trsvcid": "4420" 00:14:50.768 }, 00:14:50.768 "peer_address": { 00:14:50.768 "trtype": "RDMA", 00:14:50.768 "adrfam": "IPv4", 00:14:50.768 "traddr": "192.168.100.8", 00:14:50.768 "trsvcid": "52898" 00:14:50.768 }, 00:14:50.768 "auth": { 00:14:50.768 "state": "completed", 00:14:50.768 "digest": "sha256", 00:14:50.768 "dhgroup": "ffdhe4096" 00:14:50.768 } 00:14:50.768 } 00:14:50.768 ]' 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.768 23:05:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.026 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.592 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.850 23:05:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.850 23:05:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.850 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.851 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.851 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.109 00:14:52.109 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.109 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.109 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.368 { 00:14:52.368 "cntlid": 33, 00:14:52.368 "qid": 0, 00:14:52.368 "state": "enabled", 00:14:52.368 "listen_address": { 00:14:52.368 "trtype": "RDMA", 00:14:52.368 "adrfam": "IPv4", 00:14:52.368 "traddr": "192.168.100.8", 00:14:52.368 "trsvcid": "4420" 00:14:52.368 }, 00:14:52.368 "peer_address": { 00:14:52.368 "trtype": "RDMA", 00:14:52.368 "adrfam": "IPv4", 00:14:52.368 "traddr": "192.168.100.8", 00:14:52.368 "trsvcid": "46540" 00:14:52.368 }, 00:14:52.368 "auth": { 00:14:52.368 "state": "completed", 00:14:52.368 "digest": "sha256", 00:14:52.368 "dhgroup": "ffdhe6144" 00:14:52.368 } 00:14:52.368 } 00:14:52.368 ]' 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.368 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.627 23:05:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:14:53.194 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.453 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.712 23:05:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.712 23:05:45 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.971 00:14:53.971 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.971 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.971 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:54.229 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.229 { 00:14:54.229 "cntlid": 35, 00:14:54.229 "qid": 0, 00:14:54.229 "state": "enabled", 00:14:54.229 "listen_address": { 00:14:54.229 "trtype": "RDMA", 00:14:54.230 "adrfam": "IPv4", 00:14:54.230 "traddr": "192.168.100.8", 00:14:54.230 "trsvcid": "4420" 00:14:54.230 }, 00:14:54.230 "peer_address": { 00:14:54.230 "trtype": "RDMA", 00:14:54.230 "adrfam": "IPv4", 00:14:54.230 "traddr": "192.168.100.8", 00:14:54.230 "trsvcid": "39340" 00:14:54.230 }, 00:14:54.230 "auth": { 00:14:54.230 "state": "completed", 00:14:54.230 "digest": "sha256", 00:14:54.230 "dhgroup": "ffdhe6144" 00:14:54.230 } 00:14:54.230 } 00:14:54.230 ]' 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.230 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.488 23:05:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.056 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.056 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.315 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.573 00:14:55.573 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.573 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.573 23:05:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.832 23:05:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.832 { 00:14:55.832 "cntlid": 37, 00:14:55.832 "qid": 0, 00:14:55.832 "state": "enabled", 00:14:55.832 "listen_address": { 00:14:55.832 "trtype": "RDMA", 00:14:55.832 "adrfam": "IPv4", 00:14:55.832 "traddr": "192.168.100.8", 00:14:55.832 "trsvcid": "4420" 00:14:55.832 }, 00:14:55.832 "peer_address": { 00:14:55.832 "trtype": "RDMA", 00:14:55.832 "adrfam": "IPv4", 00:14:55.832 "traddr": "192.168.100.8", 00:14:55.832 "trsvcid": "56740" 00:14:55.832 }, 00:14:55.832 "auth": { 00:14:55.832 "state": "completed", 00:14:55.832 "digest": "sha256", 00:14:55.832 "dhgroup": "ffdhe6144" 00:14:55.832 } 00:14:55.832 } 00:14:55.832 ]' 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.832 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.091 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:14:57.028 23:05:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.028 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.286 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.545 23:05:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:57.546 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.546 { 00:14:57.546 "cntlid": 39, 00:14:57.546 "qid": 0, 00:14:57.546 "state": "enabled", 00:14:57.546 "listen_address": { 00:14:57.546 "trtype": "RDMA", 00:14:57.546 "adrfam": "IPv4", 00:14:57.546 "traddr": "192.168.100.8", 00:14:57.546 "trsvcid": "4420" 00:14:57.546 }, 00:14:57.546 "peer_address": { 00:14:57.546 "trtype": "RDMA", 00:14:57.546 "adrfam": "IPv4", 00:14:57.546 "traddr": "192.168.100.8", 00:14:57.546 "trsvcid": "52213" 00:14:57.546 }, 00:14:57.546 "auth": { 00:14:57.546 "state": "completed", 00:14:57.546 "digest": "sha256", 00:14:57.546 "dhgroup": "ffdhe6144" 00:14:57.546 } 00:14:57.546 } 00:14:57.546 ]' 00:14:57.546 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.546 23:05:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.546 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.546 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.546 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.805 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.805 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.805 23:05:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.805 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:14:58.372 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.631 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
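The run now moves from the ffdhe6144 pass to ffdhe8192, repeating the same per-key cycle that has been logged since the ffdhe3072 pass. Condensed from the commands already shown in this trace, one iteration looks roughly like the sketch below; rpc.py abbreviates /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py, the -s /var/tmp/host.sock invocations are what the hostrpc helper expands to, rpc_cmd is the harness wrapper that talks to the target application, and keyN/ckeyN stand for whichever key index the inner loop is on. This is a reading aid reconstructed from the log, not an authoritative SPDK recipe.

    # host side: restrict the initiator to one digest and one DH group for this pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # target side: register the host NQN with its DH-HMAC-CHAP key
    # (and, when the loop provides one, a controller key for bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-key keyN --dhchap-ctrlr-key ckeyN

    # host side: attach to the RDMA listener with the matching keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN

    # verify what was negotiated on the target's qpair, then tear down
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expect: sha256 / ffdhe8192 / completed (the log runs these as three separate jq checks)
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0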
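Each RPC-level pass is then followed by the kernel-initiator check seen throughout the log: nvme-cli connects with the generated DH-HMAC-CHAP secrets passed on the command line, disconnects, and the host entry is removed before the next key. Condensed in the same way, with the DHHC-1:xx:...: values standing for the secrets printed in the trace:

    # kernel host authenticates with the host secret and, where the loop provides one,
    # verifies the controller against the controller secret
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:01:...:' --dhchap-ctrl-secret 'DHHC-1:02:...:'

    # teardown before the next key index
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562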
00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.890 23:05:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.149 00:14:59.149 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.149 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.149 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:59.408 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.408 { 00:14:59.408 "cntlid": 41, 00:14:59.408 "qid": 0, 00:14:59.408 "state": "enabled", 00:14:59.408 "listen_address": { 00:14:59.408 "trtype": "RDMA", 00:14:59.408 "adrfam": "IPv4", 00:14:59.408 "traddr": "192.168.100.8", 00:14:59.408 "trsvcid": "4420" 00:14:59.408 }, 00:14:59.408 "peer_address": { 00:14:59.408 "trtype": "RDMA", 00:14:59.408 "adrfam": "IPv4", 00:14:59.408 "traddr": "192.168.100.8", 00:14:59.408 "trsvcid": "51229" 00:14:59.408 }, 00:14:59.408 "auth": { 00:14:59.408 "state": "completed", 00:14:59.408 "digest": "sha256", 00:14:59.408 "dhgroup": "ffdhe8192" 00:14:59.408 } 00:14:59.408 } 00:14:59.408 ]' 00:14:59.409 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.409 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.409 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.409 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.409 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.667 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.667 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.667 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.667 23:05:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:00.235 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.494 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.753 23:05:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.010 00:15:01.010 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.268 23:05:53 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.268 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.268 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.268 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.269 { 00:15:01.269 "cntlid": 43, 00:15:01.269 "qid": 0, 00:15:01.269 "state": "enabled", 00:15:01.269 "listen_address": { 00:15:01.269 "trtype": "RDMA", 00:15:01.269 "adrfam": "IPv4", 00:15:01.269 "traddr": "192.168.100.8", 00:15:01.269 "trsvcid": "4420" 00:15:01.269 }, 00:15:01.269 "peer_address": { 00:15:01.269 "trtype": "RDMA", 00:15:01.269 "adrfam": "IPv4", 00:15:01.269 "traddr": "192.168.100.8", 00:15:01.269 "trsvcid": "34314" 00:15:01.269 }, 00:15:01.269 "auth": { 00:15:01.269 "state": "completed", 00:15:01.269 "digest": "sha256", 00:15:01.269 "dhgroup": "ffdhe8192" 00:15:01.269 } 00:15:01.269 } 00:15:01.269 ]' 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.269 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.527 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.527 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.527 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.527 23:05:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.533 23:05:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.099 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.099 { 00:15:03.099 "cntlid": 45, 00:15:03.099 "qid": 0, 00:15:03.099 "state": "enabled", 00:15:03.099 "listen_address": { 00:15:03.099 "trtype": "RDMA", 00:15:03.099 "adrfam": "IPv4", 
00:15:03.099 "traddr": "192.168.100.8", 00:15:03.099 "trsvcid": "4420" 00:15:03.099 }, 00:15:03.099 "peer_address": { 00:15:03.099 "trtype": "RDMA", 00:15:03.099 "adrfam": "IPv4", 00:15:03.099 "traddr": "192.168.100.8", 00:15:03.099 "trsvcid": "55585" 00:15:03.099 }, 00:15:03.099 "auth": { 00:15:03.099 "state": "completed", 00:15:03.099 "digest": "sha256", 00:15:03.099 "dhgroup": "ffdhe8192" 00:15:03.099 } 00:15:03.099 } 00:15:03.099 ]' 00:15:03.099 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.356 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.614 23:05:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.179 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.437 23:05:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.002 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.002 { 00:15:05.002 "cntlid": 47, 00:15:05.002 "qid": 0, 00:15:05.002 "state": "enabled", 00:15:05.002 "listen_address": { 00:15:05.002 "trtype": "RDMA", 00:15:05.002 "adrfam": "IPv4", 00:15:05.002 "traddr": "192.168.100.8", 00:15:05.002 "trsvcid": "4420" 00:15:05.002 }, 00:15:05.002 "peer_address": { 00:15:05.002 "trtype": "RDMA", 00:15:05.002 "adrfam": "IPv4", 00:15:05.002 "traddr": "192.168.100.8", 00:15:05.002 "trsvcid": "51388" 00:15:05.002 }, 00:15:05.002 "auth": { 00:15:05.002 "state": "completed", 00:15:05.002 "digest": "sha256", 00:15:05.002 "dhgroup": "ffdhe8192" 00:15:05.002 } 00:15:05.002 } 00:15:05.002 ]' 00:15:05.002 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.259 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.260 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.260 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.260 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.260 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.260 23:05:57 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.260 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.517 23:05:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.082 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.339 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.340 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.340 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.340 23:05:58 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.597 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.597 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.854 { 00:15:06.854 "cntlid": 49, 00:15:06.854 "qid": 0, 00:15:06.854 "state": "enabled", 00:15:06.854 "listen_address": { 00:15:06.854 "trtype": "RDMA", 00:15:06.854 "adrfam": "IPv4", 00:15:06.854 "traddr": "192.168.100.8", 00:15:06.854 "trsvcid": "4420" 00:15:06.854 }, 00:15:06.854 "peer_address": { 00:15:06.854 "trtype": "RDMA", 00:15:06.854 "adrfam": "IPv4", 00:15:06.854 "traddr": "192.168.100.8", 00:15:06.854 "trsvcid": "54433" 00:15:06.854 }, 00:15:06.854 "auth": { 00:15:06.854 "state": "completed", 00:15:06.854 "digest": "sha384", 00:15:06.854 "dhgroup": "null" 00:15:06.854 } 00:15:06.854 } 00:15:06.854 ]' 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:06.854 23:05:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.854 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.854 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.854 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.111 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.676 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.676 23:05:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.934 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.191 00:15:08.192 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.192 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.192 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.449 23:06:00 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.449 { 00:15:08.449 "cntlid": 51, 00:15:08.449 "qid": 0, 00:15:08.449 "state": "enabled", 00:15:08.449 "listen_address": { 00:15:08.449 "trtype": "RDMA", 00:15:08.449 "adrfam": "IPv4", 00:15:08.449 "traddr": "192.168.100.8", 00:15:08.449 "trsvcid": "4420" 00:15:08.449 }, 00:15:08.449 "peer_address": { 00:15:08.449 "trtype": "RDMA", 00:15:08.449 "adrfam": "IPv4", 00:15:08.449 "traddr": "192.168.100.8", 00:15:08.449 "trsvcid": "56248" 00:15:08.449 }, 00:15:08.449 "auth": { 00:15:08.449 "state": "completed", 00:15:08.449 "digest": "sha384", 00:15:08.449 "dhgroup": "null" 00:15:08.449 } 00:15:08.449 } 00:15:08.449 ]' 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.449 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.707 23:06:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:09.271 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.529 
23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.529 23:06:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.787 00:15:09.787 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.787 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.787 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.045 { 00:15:10.045 "cntlid": 53, 00:15:10.045 "qid": 0, 00:15:10.045 "state": "enabled", 00:15:10.045 "listen_address": { 00:15:10.045 "trtype": "RDMA", 00:15:10.045 "adrfam": "IPv4", 00:15:10.045 "traddr": "192.168.100.8", 00:15:10.045 "trsvcid": "4420" 00:15:10.045 }, 00:15:10.045 "peer_address": { 00:15:10.045 "trtype": "RDMA", 00:15:10.045 "adrfam": "IPv4", 00:15:10.045 "traddr": "192.168.100.8", 00:15:10.045 "trsvcid": "39047" 00:15:10.045 }, 00:15:10.045 "auth": { 00:15:10.045 "state": "completed", 00:15:10.045 "digest": "sha384", 00:15:10.045 "dhgroup": "null" 00:15:10.045 } 00:15:10.045 } 00:15:10.045 ]' 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.045 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.302 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.302 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.302 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.302 23:06:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:11.235 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.236 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.493 00:15:11.493 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.493 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.493 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.751 { 00:15:11.751 "cntlid": 55, 00:15:11.751 "qid": 0, 00:15:11.751 "state": "enabled", 00:15:11.751 "listen_address": { 00:15:11.751 "trtype": "RDMA", 00:15:11.751 "adrfam": "IPv4", 00:15:11.751 "traddr": "192.168.100.8", 00:15:11.751 "trsvcid": "4420" 00:15:11.751 }, 00:15:11.751 "peer_address": { 00:15:11.751 "trtype": "RDMA", 00:15:11.751 "adrfam": "IPv4", 00:15:11.751 "traddr": "192.168.100.8", 00:15:11.751 "trsvcid": "37108" 00:15:11.751 }, 00:15:11.751 "auth": { 00:15:11.751 "state": "completed", 00:15:11.751 "digest": "sha384", 00:15:11.751 "dhgroup": "null" 00:15:11.751 } 00:15:11.751 } 00:15:11.751 ]' 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:11.751 23:06:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.751 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.751 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.751 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.009 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:12.574 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.831 23:06:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.831 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.089 00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
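Note: after each attach the script re-checks the negotiated parameters on the target side. The jq probes that follow are effectively the assertions sketched below, with the expected digest and dhgroup tracking the current pass (sha384/ffdhe2048 at this point). RPC again stands for the scripts/rpc.py path printed in the trace, and the escaped-glob comparisons in the log ([[ nvme0 == \n\v\m\e\0 ]]) are just literal string matches.

  name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)            # target-side socket
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]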
00:15:13.089 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.346 { 00:15:13.346 "cntlid": 57, 00:15:13.346 "qid": 0, 00:15:13.346 "state": "enabled", 00:15:13.346 "listen_address": { 00:15:13.346 "trtype": "RDMA", 00:15:13.346 "adrfam": "IPv4", 00:15:13.346 "traddr": "192.168.100.8", 00:15:13.346 "trsvcid": "4420" 00:15:13.346 }, 00:15:13.346 "peer_address": { 00:15:13.346 "trtype": "RDMA", 00:15:13.346 "adrfam": "IPv4", 00:15:13.346 "traddr": "192.168.100.8", 00:15:13.346 "trsvcid": "42301" 00:15:13.346 }, 00:15:13.346 "auth": { 00:15:13.346 "state": "completed", 00:15:13.346 "digest": "sha384", 00:15:13.346 "dhgroup": "ffdhe2048" 00:15:13.346 } 00:15:13.346 } 00:15:13.346 ]' 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.346 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.603 23:06:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:14.169 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.426 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.684 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.942 00:15:14.942 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.942 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.942 23:06:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.942 { 00:15:14.942 "cntlid": 59, 00:15:14.942 "qid": 0, 00:15:14.942 "state": "enabled", 00:15:14.942 "listen_address": { 00:15:14.942 "trtype": "RDMA", 00:15:14.942 "adrfam": "IPv4", 00:15:14.942 "traddr": "192.168.100.8", 00:15:14.942 "trsvcid": "4420" 00:15:14.942 }, 
00:15:14.942 "peer_address": { 00:15:14.942 "trtype": "RDMA", 00:15:14.942 "adrfam": "IPv4", 00:15:14.942 "traddr": "192.168.100.8", 00:15:14.942 "trsvcid": "44023" 00:15:14.942 }, 00:15:14.942 "auth": { 00:15:14.942 "state": "completed", 00:15:14.942 "digest": "sha384", 00:15:14.942 "dhgroup": "ffdhe2048" 00:15:14.942 } 00:15:14.942 } 00:15:14.942 ]' 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.942 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.200 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.200 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.200 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.201 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.201 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.458 23:06:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.024 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.282 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.283 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.283 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.541 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.541 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.799 { 00:15:16.799 "cntlid": 61, 00:15:16.799 "qid": 0, 00:15:16.799 "state": "enabled", 00:15:16.799 "listen_address": { 00:15:16.799 "trtype": "RDMA", 00:15:16.799 "adrfam": "IPv4", 00:15:16.799 "traddr": "192.168.100.8", 00:15:16.799 "trsvcid": "4420" 00:15:16.799 }, 00:15:16.799 "peer_address": { 00:15:16.799 "trtype": "RDMA", 00:15:16.799 "adrfam": "IPv4", 00:15:16.799 "traddr": "192.168.100.8", 00:15:16.799 "trsvcid": "50871" 00:15:16.799 }, 00:15:16.799 "auth": { 00:15:16.799 "state": "completed", 00:15:16.799 "digest": "sha384", 00:15:16.799 "dhgroup": "ffdhe2048" 00:15:16.799 } 00:15:16.799 } 00:15:16.799 ]' 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.799 23:06:08 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.799 23:06:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.057 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.623 23:06:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.882 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.140 00:15:18.140 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.140 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.140 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.398 { 00:15:18.398 "cntlid": 63, 00:15:18.398 "qid": 0, 00:15:18.398 "state": "enabled", 00:15:18.398 "listen_address": { 00:15:18.398 "trtype": "RDMA", 00:15:18.398 "adrfam": "IPv4", 00:15:18.398 "traddr": "192.168.100.8", 00:15:18.398 "trsvcid": "4420" 00:15:18.398 }, 00:15:18.398 "peer_address": { 00:15:18.398 "trtype": "RDMA", 00:15:18.398 "adrfam": "IPv4", 00:15:18.398 "traddr": "192.168.100.8", 00:15:18.398 "trsvcid": "38385" 00:15:18.398 }, 00:15:18.398 "auth": { 00:15:18.398 "state": "completed", 00:15:18.398 "digest": "sha384", 00:15:18.398 "dhgroup": "ffdhe2048" 00:15:18.398 } 00:15:18.398 } 00:15:18.398 ]' 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.398 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.657 23:06:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:19.223 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:19.482 23:06:11 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.482 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.741 00:15:19.741 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.741 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.741 23:06:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
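At this point the outer loops from target/auth.sh@92-93 have advanced to the next DH group: the trace switches from ffdhe2048 to ffdhe3072 and restarts at key index 0. A sketch of that driver loop as it appears from the traced iterations (only sha384 and the four ffdhe groups below occur in this part of the log; the literal array definitions are an inference, not a quote from the script):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this trace
keys=(key0 key1 key2 key3)                           # key indices 0..3 appear per group

for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
                # Pin the host to one digest/dhgroup, then run one authenticated attach/connect pass.
                hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
                connect_authenticate sha384 "$dhgroup" "$keyid"
        done
done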
00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.000 { 00:15:20.000 "cntlid": 65, 00:15:20.000 "qid": 0, 00:15:20.000 "state": "enabled", 00:15:20.000 "listen_address": { 00:15:20.000 "trtype": "RDMA", 00:15:20.000 "adrfam": "IPv4", 00:15:20.000 "traddr": "192.168.100.8", 00:15:20.000 "trsvcid": "4420" 00:15:20.000 }, 00:15:20.000 "peer_address": { 00:15:20.000 "trtype": "RDMA", 00:15:20.000 "adrfam": "IPv4", 00:15:20.000 "traddr": "192.168.100.8", 00:15:20.000 "trsvcid": "56126" 00:15:20.000 }, 00:15:20.000 "auth": { 00:15:20.000 "state": "completed", 00:15:20.000 "digest": "sha384", 00:15:20.000 "dhgroup": "ffdhe3072" 00:15:20.000 } 00:15:20.000 } 00:15:20.000 ]' 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.000 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.259 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.259 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.259 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.259 23:06:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:20.826 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.084 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.353 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
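Each pass closes with the same verification block traced repeatedly above (target/auth.sh@44-49): the host controller name is checked, the subsystem's qpair list is captured, and three jq probes confirm that the negotiated digest, DH group, and auth state match what was requested. A compact sketch of that check (the jq expressions and RPC names are taken from the trace; the here-string piping and explicit exit on mismatch are illustrative):

# Confirm the attach negotiated what was asked for, then tear it down again.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1

hostrpc bdev_nvme_detach_controller nvme0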
00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.354 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.669 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.669 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.669 { 00:15:21.669 "cntlid": 67, 00:15:21.669 "qid": 0, 00:15:21.669 "state": "enabled", 00:15:21.669 "listen_address": { 00:15:21.669 "trtype": "RDMA", 00:15:21.669 "adrfam": "IPv4", 00:15:21.669 "traddr": "192.168.100.8", 00:15:21.670 "trsvcid": "4420" 00:15:21.670 }, 00:15:21.670 "peer_address": { 00:15:21.670 "trtype": "RDMA", 00:15:21.670 "adrfam": "IPv4", 00:15:21.670 "traddr": "192.168.100.8", 00:15:21.670 "trsvcid": "58381" 00:15:21.670 }, 00:15:21.670 "auth": { 00:15:21.670 "state": "completed", 00:15:21.670 "digest": "sha384", 00:15:21.670 "dhgroup": "ffdhe3072" 00:15:21.670 } 00:15:21.670 } 00:15:21.670 ]' 00:15:21.670 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.948 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.948 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.948 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.948 23:06:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.948 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.948 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.948 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.948 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.885 23:06:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
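Besides the SPDK-host verification, every pass also exercises the kernel initiator (target/auth.sh@52-56 in the trace): nvme connect is given the raw DHHC-1 secrets, the controller is disconnected again, and the host entry is removed from the subsystem so the next key/dhgroup combination starts clean. A sketch of that leg; DHCHAP_SECRET and DHCHAP_CTRL_SECRET stand for the literal DHHC-1:... strings shown in the trace and are not reproduced here:

# Kernel initiator: in-band DH-HMAC-CHAP using the secrets themselves (not keyring names).
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Clean up the target-side host entry before the next iteration.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562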
00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.885 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.143 00:15:23.143 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.143 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.143 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.402 { 00:15:23.402 "cntlid": 69, 00:15:23.402 "qid": 0, 00:15:23.402 "state": "enabled", 00:15:23.402 "listen_address": { 00:15:23.402 "trtype": "RDMA", 00:15:23.402 "adrfam": "IPv4", 00:15:23.402 "traddr": "192.168.100.8", 00:15:23.402 "trsvcid": "4420" 00:15:23.402 }, 00:15:23.402 "peer_address": { 00:15:23.402 "trtype": "RDMA", 00:15:23.402 "adrfam": "IPv4", 00:15:23.402 "traddr": "192.168.100.8", 00:15:23.402 "trsvcid": "37398" 00:15:23.402 }, 00:15:23.402 "auth": { 00:15:23.402 "state": "completed", 00:15:23.402 "digest": "sha384", 00:15:23.402 "dhgroup": "ffdhe3072" 00:15:23.402 } 00:15:23.402 } 00:15:23.402 ]' 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:23.402 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.661 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.661 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.661 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.661 23:06:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:24.227 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.486 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.745 23:06:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.745 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.004 
23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.004 { 00:15:25.004 "cntlid": 71, 00:15:25.004 "qid": 0, 00:15:25.004 "state": "enabled", 00:15:25.004 "listen_address": { 00:15:25.004 "trtype": "RDMA", 00:15:25.004 "adrfam": "IPv4", 00:15:25.004 "traddr": "192.168.100.8", 00:15:25.004 "trsvcid": "4420" 00:15:25.004 }, 00:15:25.004 "peer_address": { 00:15:25.004 "trtype": "RDMA", 00:15:25.004 "adrfam": "IPv4", 00:15:25.004 "traddr": "192.168.100.8", 00:15:25.004 "trsvcid": "60223" 00:15:25.004 }, 00:15:25.004 "auth": { 00:15:25.004 "state": "completed", 00:15:25.004 "digest": "sha384", 00:15:25.004 "dhgroup": "ffdhe3072" 00:15:25.004 } 00:15:25.004 } 00:15:25.004 ]' 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.004 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.263 23:06:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.199 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.200 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.458 00:15:26.458 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.458 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.458 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.717 { 00:15:26.717 "cntlid": 73, 00:15:26.717 "qid": 0, 00:15:26.717 "state": "enabled", 00:15:26.717 "listen_address": { 00:15:26.717 "trtype": "RDMA", 00:15:26.717 "adrfam": "IPv4", 00:15:26.717 "traddr": "192.168.100.8", 00:15:26.717 "trsvcid": "4420" 00:15:26.717 }, 00:15:26.717 "peer_address": { 00:15:26.717 "trtype": "RDMA", 00:15:26.717 "adrfam": "IPv4", 00:15:26.717 
"traddr": "192.168.100.8", 00:15:26.717 "trsvcid": "47807" 00:15:26.717 }, 00:15:26.717 "auth": { 00:15:26.717 "state": "completed", 00:15:26.717 "digest": "sha384", 00:15:26.717 "dhgroup": "ffdhe4096" 00:15:26.717 } 00:15:26.717 } 00:15:26.717 ]' 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.717 23:06:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.976 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.976 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.976 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.976 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:27.910 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.911 23:06:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.911 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.170 00:15:28.170 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.170 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.170 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.429 { 00:15:28.429 "cntlid": 75, 00:15:28.429 "qid": 0, 00:15:28.429 "state": "enabled", 00:15:28.429 "listen_address": { 00:15:28.429 "trtype": "RDMA", 00:15:28.429 "adrfam": "IPv4", 00:15:28.429 "traddr": "192.168.100.8", 00:15:28.429 "trsvcid": "4420" 00:15:28.429 }, 00:15:28.429 "peer_address": { 00:15:28.429 "trtype": "RDMA", 00:15:28.429 "adrfam": "IPv4", 00:15:28.429 "traddr": "192.168.100.8", 00:15:28.429 "trsvcid": "34830" 00:15:28.429 }, 00:15:28.429 "auth": { 00:15:28.429 "state": "completed", 00:15:28.429 "digest": "sha384", 00:15:28.429 "dhgroup": "ffdhe4096" 00:15:28.429 } 00:15:28.429 } 00:15:28.429 ]' 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.429 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.687 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.687 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
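The detach call above, like every other hostrpc line in this trace, is immediately followed by target/auth.sh@31 expanding it into rpc.py against the host-side RPC socket, so the wrapper is effectively the one-liner below (a reconstruction from those expansions, not the script text itself). rpc_cmd, by contrast, comes from common/autotest_common.sh, and only its call sites appear here because it wraps the call in xtrace_disable / set +x:

# hostrpc as implied by its xtrace expansions throughout this log (reconstruction).
hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}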
00:15:28.687 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.687 23:06:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.622 23:06:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.881 00:15:29.881 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.881 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.881 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.140 { 00:15:30.140 "cntlid": 77, 00:15:30.140 "qid": 0, 00:15:30.140 "state": "enabled", 00:15:30.140 "listen_address": { 00:15:30.140 "trtype": "RDMA", 00:15:30.140 "adrfam": "IPv4", 00:15:30.140 "traddr": "192.168.100.8", 00:15:30.140 "trsvcid": "4420" 00:15:30.140 }, 00:15:30.140 "peer_address": { 00:15:30.140 "trtype": "RDMA", 00:15:30.140 "adrfam": "IPv4", 00:15:30.140 "traddr": "192.168.100.8", 00:15:30.140 "trsvcid": "56680" 00:15:30.140 }, 00:15:30.140 "auth": { 00:15:30.140 "state": "completed", 00:15:30.140 "digest": "sha384", 00:15:30.140 "dhgroup": "ffdhe4096" 00:15:30.140 } 00:15:30.140 } 00:15:30.140 ]' 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.140 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.399 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.399 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.399 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.399 23:06:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:30.968 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.228 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.486 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.744 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
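Note the difference in this key3 pass: nvmf_subsystem_add_host and bdev_nvme_attach_controller were issued with --dhchap-key key3 only, with no --dhchap-ctrlr-key. That is the ${ckeys[$3]:+...} expansion at target/auth.sh@37 at work: when the controller key for that index is empty, the whole option pair drops out of the command line. A small self-contained illustration of that expansion (placeholder values, not the script's real key material):

ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # index 3 deliberately left empty

for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${ckey[*]:-<no ctrlr-key arguments>}"
done
# Prints:
#   key0 -> --dhchap-ctrlr-key ckey0
#   key3 -> <no ctrlr-key arguments>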
00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.744 { 00:15:31.744 "cntlid": 79, 00:15:31.744 "qid": 0, 00:15:31.744 "state": "enabled", 00:15:31.744 "listen_address": { 00:15:31.744 "trtype": "RDMA", 00:15:31.744 "adrfam": "IPv4", 00:15:31.744 "traddr": "192.168.100.8", 00:15:31.744 "trsvcid": "4420" 00:15:31.744 }, 00:15:31.744 "peer_address": { 00:15:31.744 "trtype": "RDMA", 00:15:31.744 "adrfam": "IPv4", 00:15:31.744 "traddr": "192.168.100.8", 00:15:31.744 "trsvcid": "38017" 00:15:31.744 }, 00:15:31.744 "auth": { 00:15:31.744 "state": "completed", 00:15:31.744 "digest": "sha384", 00:15:31.744 "dhgroup": "ffdhe4096" 00:15:31.744 } 00:15:31.744 } 00:15:31.744 ]' 00:15:31.744 23:06:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.744 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.744 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.002 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.002 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.002 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.002 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.002 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.261 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:32.829 23:06:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.829 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.088 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.346 00:15:33.346 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.346 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.346 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.605 { 00:15:33.605 "cntlid": 81, 00:15:33.605 "qid": 0, 00:15:33.605 "state": "enabled", 00:15:33.605 "listen_address": { 00:15:33.605 "trtype": "RDMA", 00:15:33.605 "adrfam": "IPv4", 00:15:33.605 "traddr": "192.168.100.8", 00:15:33.605 "trsvcid": "4420" 00:15:33.605 }, 00:15:33.605 "peer_address": { 00:15:33.605 "trtype": "RDMA", 00:15:33.605 "adrfam": "IPv4", 00:15:33.605 "traddr": "192.168.100.8", 00:15:33.605 "trsvcid": "44675" 00:15:33.605 }, 00:15:33.605 "auth": { 00:15:33.605 "state": "completed", 00:15:33.605 "digest": "sha384", 00:15:33.605 "dhgroup": "ffdhe6144" 00:15:33.605 } 00:15:33.605 } 00:15:33.605 ]' 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.605 23:06:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.863 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:34.429 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.688 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.946 23:06:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.946 23:06:26 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.946 23:06:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.204 00:15:35.204 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.204 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.204 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.462 { 00:15:35.462 "cntlid": 83, 00:15:35.462 "qid": 0, 00:15:35.462 "state": "enabled", 00:15:35.462 "listen_address": { 00:15:35.462 "trtype": "RDMA", 00:15:35.462 "adrfam": "IPv4", 00:15:35.462 "traddr": "192.168.100.8", 00:15:35.462 "trsvcid": "4420" 00:15:35.462 }, 00:15:35.462 "peer_address": { 00:15:35.462 "trtype": "RDMA", 00:15:35.462 "adrfam": "IPv4", 00:15:35.462 "traddr": "192.168.100.8", 00:15:35.462 "trsvcid": "46370" 00:15:35.462 }, 00:15:35.462 "auth": { 00:15:35.462 "state": "completed", 00:15:35.462 "digest": "sha384", 00:15:35.462 "dhgroup": "ffdhe6144" 00:15:35.462 } 00:15:35.462 } 00:15:35.462 ]' 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.462 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.720 23:06:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.286 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.544 23:06:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.802 00:15:36.802 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.802 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.802 23:06:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.060 { 00:15:37.060 "cntlid": 85, 00:15:37.060 "qid": 0, 00:15:37.060 "state": "enabled", 00:15:37.060 "listen_address": { 00:15:37.060 "trtype": "RDMA", 00:15:37.060 "adrfam": "IPv4", 00:15:37.060 "traddr": "192.168.100.8", 00:15:37.060 "trsvcid": "4420" 00:15:37.060 }, 00:15:37.060 "peer_address": { 00:15:37.060 "trtype": "RDMA", 00:15:37.060 "adrfam": "IPv4", 00:15:37.060 "traddr": "192.168.100.8", 00:15:37.060 "trsvcid": "50437" 00:15:37.060 }, 00:15:37.060 "auth": { 00:15:37.060 "state": "completed", 00:15:37.060 "digest": "sha384", 00:15:37.060 "dhgroup": "ffdhe6144" 00:15:37.060 } 00:15:37.060 } 00:15:37.060 ]' 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.060 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.318 23:06:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:37.885 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.143 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.401 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.659 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.659 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.918 23:06:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.918 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.918 { 00:15:38.918 "cntlid": 87, 00:15:38.918 "qid": 0, 00:15:38.918 "state": "enabled", 00:15:38.918 "listen_address": { 00:15:38.918 "trtype": "RDMA", 00:15:38.918 "adrfam": "IPv4", 00:15:38.918 "traddr": "192.168.100.8", 00:15:38.918 "trsvcid": "4420" 00:15:38.918 }, 00:15:38.918 "peer_address": { 00:15:38.918 "trtype": "RDMA", 00:15:38.918 "adrfam": "IPv4", 00:15:38.918 "traddr": "192.168.100.8", 00:15:38.918 "trsvcid": "38692" 
00:15:38.918 }, 00:15:38.918 "auth": { 00:15:38.918 "state": "completed", 00:15:38.918 "digest": "sha384", 00:15:38.918 "dhgroup": "ffdhe6144" 00:15:38.918 } 00:15:38.918 } 00:15:38.918 ]' 00:15:38.918 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.918 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.918 23:06:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.918 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.918 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.918 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.918 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.918 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.176 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.744 23:06:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.002 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.003 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.003 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.003 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.003 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.003 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.570 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.570 { 00:15:40.570 "cntlid": 89, 00:15:40.570 "qid": 0, 00:15:40.570 "state": "enabled", 00:15:40.570 "listen_address": { 00:15:40.570 "trtype": "RDMA", 00:15:40.570 "adrfam": "IPv4", 00:15:40.570 "traddr": "192.168.100.8", 00:15:40.570 "trsvcid": "4420" 00:15:40.570 }, 00:15:40.570 "peer_address": { 00:15:40.570 "trtype": "RDMA", 00:15:40.570 "adrfam": "IPv4", 00:15:40.570 "traddr": "192.168.100.8", 00:15:40.570 "trsvcid": "52476" 00:15:40.570 }, 00:15:40.570 "auth": { 00:15:40.570 "state": "completed", 00:15:40.570 "digest": "sha384", 00:15:40.570 "dhgroup": "ffdhe8192" 00:15:40.570 } 00:15:40.570 } 00:15:40.570 ]' 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.570 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.875 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.875 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.875 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.875 23:06:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.875 23:06:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.875 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:41.454 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.712 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.971 23:06:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.229 00:15:42.229 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.229 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.229 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.488 { 00:15:42.488 "cntlid": 91, 00:15:42.488 "qid": 0, 00:15:42.488 "state": "enabled", 00:15:42.488 "listen_address": { 00:15:42.488 "trtype": "RDMA", 00:15:42.488 "adrfam": "IPv4", 00:15:42.488 "traddr": "192.168.100.8", 00:15:42.488 "trsvcid": "4420" 00:15:42.488 }, 00:15:42.488 "peer_address": { 00:15:42.488 "trtype": "RDMA", 00:15:42.488 "adrfam": "IPv4", 00:15:42.488 "traddr": "192.168.100.8", 00:15:42.488 "trsvcid": "52468" 00:15:42.488 }, 00:15:42.488 "auth": { 00:15:42.488 "state": "completed", 00:15:42.488 "digest": "sha384", 00:15:42.488 "dhgroup": "ffdhe8192" 00:15:42.488 } 00:15:42.488 } 00:15:42.488 ]' 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.488 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.747 23:06:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:43.315 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.574 23:06:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.141 00:15:44.141 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.141 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.141 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.401 23:06:36 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.401 { 00:15:44.401 "cntlid": 93, 00:15:44.401 "qid": 0, 00:15:44.401 "state": "enabled", 00:15:44.401 "listen_address": { 00:15:44.401 "trtype": "RDMA", 00:15:44.401 "adrfam": "IPv4", 00:15:44.401 "traddr": "192.168.100.8", 00:15:44.401 "trsvcid": "4420" 00:15:44.401 }, 00:15:44.401 "peer_address": { 00:15:44.401 "trtype": "RDMA", 00:15:44.401 "adrfam": "IPv4", 00:15:44.401 "traddr": "192.168.100.8", 00:15:44.401 "trsvcid": "45520" 00:15:44.401 }, 00:15:44.401 "auth": { 00:15:44.401 "state": "completed", 00:15:44.401 "digest": "sha384", 00:15:44.401 "dhgroup": "ffdhe8192" 00:15:44.401 } 00:15:44.401 } 00:15:44.401 ]' 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.401 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.659 23:06:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:45.226 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:45.485 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.486 23:06:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.053 00:15:46.053 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.053 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.053 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.311 { 00:15:46.311 "cntlid": 95, 00:15:46.311 "qid": 0, 00:15:46.311 "state": "enabled", 00:15:46.311 "listen_address": { 00:15:46.311 "trtype": "RDMA", 00:15:46.311 "adrfam": "IPv4", 00:15:46.311 "traddr": "192.168.100.8", 00:15:46.311 "trsvcid": "4420" 00:15:46.311 }, 00:15:46.311 "peer_address": { 00:15:46.311 "trtype": "RDMA", 00:15:46.311 "adrfam": "IPv4", 00:15:46.311 "traddr": "192.168.100.8", 00:15:46.311 "trsvcid": "38217" 00:15:46.311 }, 00:15:46.311 "auth": { 00:15:46.311 "state": "completed", 00:15:46.311 "digest": "sha384", 00:15:46.311 "dhgroup": "ffdhe8192" 00:15:46.311 } 00:15:46.311 } 00:15:46.311 ]' 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.311 
23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.311 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.312 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.312 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.312 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.570 23:06:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:47.137 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.138 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.396 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:47.396 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
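The xtrace above is the connect_authenticate cycle from target/auth.sh repeating for every digest/dhgroup/key combination (here sha384 with the ffdhe groups, then sha512 with the null group, keys 0 through 3). Below is a minimal sketch of one iteration, reconstructed only from the RPC calls visible in this log; the subsystem NQN, host UUID, addresses and script paths are copied from this run, key0..key3/ckey0..ckey3 are the key names the script registered earlier, and the DHHC-1 secrets are elided. It is an illustration of the pattern, not the script itself.

# Sketch of one connect_authenticate pass, assuming the same socket layout as
# this run: the nvmf target answers on rpc.py's default socket (the rpc_cmd
# helper), while the host-side bdev_nvme app uses -s /var/tmp/host.sock (the
# hostrpc helper seen in the trace).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
digest=sha384        # the trace also runs sha512
dhgroup=ffdhe8192    # the trace also runs ffdhe4096, ffdhe6144 and null
keyid=0              # keys 0-3; key3 is added without a controller key

# 1. Restrict the host to a single digest/dhgroup pair for this pass.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host NQN on the subsystem with its DH-HMAC-CHAP key
#    (and, when one is defined, the controller key for bidirectional auth).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller over RDMA; the DH-HMAC-CHAP handshake happens here.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify what the [[ ... ]] checks in the trace verify: the controller
#    exists and the qpair reports the negotiated digest, dhgroup and a
#    completed auth state.
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Detach the host-side controller, repeat the handshake once more through
#    the kernel initiator (nvme-cli) with the raw DHHC-1 secrets, then drop
#    the host entry before the next key/dhgroup/digest combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
#   nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
#       --hostid <uuid> --dhchap-secret DHHC-1:... [--dhchap-ctrl-secret DHHC-1:...]
#   nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"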
00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.397 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.655 00:15:47.655 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.656 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.656 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.914 { 00:15:47.914 "cntlid": 97, 00:15:47.914 "qid": 0, 00:15:47.914 "state": "enabled", 00:15:47.914 "listen_address": { 00:15:47.914 "trtype": "RDMA", 00:15:47.914 "adrfam": "IPv4", 00:15:47.914 "traddr": "192.168.100.8", 00:15:47.914 "trsvcid": "4420" 00:15:47.914 }, 00:15:47.914 "peer_address": { 00:15:47.914 "trtype": "RDMA", 00:15:47.914 "adrfam": "IPv4", 00:15:47.914 "traddr": "192.168.100.8", 00:15:47.914 "trsvcid": "40957" 00:15:47.914 }, 00:15:47.914 "auth": { 00:15:47.914 "state": "completed", 00:15:47.914 "digest": "sha512", 00:15:47.914 "dhgroup": "null" 00:15:47.914 } 00:15:47.914 } 00:15:47.914 ]' 00:15:47.914 23:06:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.914 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.914 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.915 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.915 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.915 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.915 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.915 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.173 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:48.741 23:06:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.000 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.259 00:15:49.259 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.259 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.259 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.518 { 00:15:49.518 "cntlid": 99, 00:15:49.518 "qid": 0, 00:15:49.518 "state": "enabled", 00:15:49.518 "listen_address": { 00:15:49.518 "trtype": "RDMA", 00:15:49.518 "adrfam": "IPv4", 00:15:49.518 "traddr": "192.168.100.8", 00:15:49.518 "trsvcid": "4420" 00:15:49.518 }, 00:15:49.518 "peer_address": { 00:15:49.518 "trtype": "RDMA", 00:15:49.518 "adrfam": "IPv4", 00:15:49.518 "traddr": "192.168.100.8", 00:15:49.518 "trsvcid": "39723" 00:15:49.518 }, 00:15:49.518 "auth": { 00:15:49.518 "state": "completed", 00:15:49.518 "digest": "sha512", 00:15:49.518 "dhgroup": "null" 00:15:49.518 } 00:15:49.518 } 00:15:49.518 ]' 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.518 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.776 23:06:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:50.343 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.602 23:06:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.602 23:06:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.860 00:15:50.860 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.860 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.860 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.119 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.119 { 00:15:51.119 "cntlid": 101, 00:15:51.119 "qid": 0, 00:15:51.120 "state": "enabled", 00:15:51.120 "listen_address": { 00:15:51.120 "trtype": "RDMA", 00:15:51.120 "adrfam": "IPv4", 00:15:51.120 "traddr": "192.168.100.8", 00:15:51.120 "trsvcid": "4420" 00:15:51.120 }, 00:15:51.120 "peer_address": { 00:15:51.120 "trtype": "RDMA", 
00:15:51.120 "adrfam": "IPv4", 00:15:51.120 "traddr": "192.168.100.8", 00:15:51.120 "trsvcid": "51110" 00:15:51.120 }, 00:15:51.120 "auth": { 00:15:51.120 "state": "completed", 00:15:51.120 "digest": "sha512", 00:15:51.120 "dhgroup": "null" 00:15:51.120 } 00:15:51.120 } 00:15:51.120 ]' 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.120 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.378 23:06:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:51.945 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.204 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.205 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.463 00:15:52.463 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.463 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.463 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.722 { 00:15:52.722 "cntlid": 103, 00:15:52.722 "qid": 0, 00:15:52.722 "state": "enabled", 00:15:52.722 "listen_address": { 00:15:52.722 "trtype": "RDMA", 00:15:52.722 "adrfam": "IPv4", 00:15:52.722 "traddr": "192.168.100.8", 00:15:52.722 "trsvcid": "4420" 00:15:52.722 }, 00:15:52.722 "peer_address": { 00:15:52.722 "trtype": "RDMA", 00:15:52.722 "adrfam": "IPv4", 00:15:52.722 "traddr": "192.168.100.8", 00:15:52.722 "trsvcid": "53338" 00:15:52.722 }, 00:15:52.722 "auth": { 00:15:52.722 "state": "completed", 00:15:52.722 "digest": "sha512", 00:15:52.722 "dhgroup": "null" 00:15:52.722 } 00:15:52.722 } 00:15:52.722 ]' 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.722 23:06:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.981 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:15:53.547 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:53.806 23:06:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.806 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.065 00:15:54.065 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.065 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.065 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.324 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.324 { 00:15:54.324 "cntlid": 105, 00:15:54.324 "qid": 0, 00:15:54.324 "state": "enabled", 00:15:54.324 "listen_address": { 00:15:54.324 "trtype": "RDMA", 00:15:54.324 "adrfam": "IPv4", 00:15:54.324 "traddr": "192.168.100.8", 00:15:54.324 "trsvcid": "4420" 00:15:54.324 }, 00:15:54.324 "peer_address": { 00:15:54.324 "trtype": "RDMA", 00:15:54.324 "adrfam": "IPv4", 00:15:54.324 "traddr": "192.168.100.8", 00:15:54.324 "trsvcid": "56115" 00:15:54.324 }, 00:15:54.324 "auth": { 00:15:54.324 "state": "completed", 00:15:54.324 "digest": "sha512", 00:15:54.324 "dhgroup": "ffdhe2048" 00:15:54.325 } 00:15:54.325 } 00:15:54.325 ]' 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.325 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.584 23:06:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:15:55.151 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.410 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.669 00:15:55.669 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.669 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.669 23:06:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.929 23:06:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.929 { 00:15:55.929 "cntlid": 107, 00:15:55.929 "qid": 0, 00:15:55.929 "state": "enabled", 00:15:55.929 "listen_address": { 00:15:55.929 "trtype": "RDMA", 00:15:55.929 "adrfam": "IPv4", 00:15:55.929 "traddr": "192.168.100.8", 00:15:55.929 "trsvcid": "4420" 00:15:55.929 }, 00:15:55.929 "peer_address": { 00:15:55.929 "trtype": "RDMA", 00:15:55.929 "adrfam": "IPv4", 00:15:55.929 "traddr": "192.168.100.8", 00:15:55.929 "trsvcid": "37656" 00:15:55.929 }, 00:15:55.929 "auth": { 00:15:55.929 "state": "completed", 00:15:55.929 "digest": "sha512", 00:15:55.929 "dhgroup": "ffdhe2048" 00:15:55.929 } 00:15:55.929 } 00:15:55.929 ]' 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.929 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.188 23:06:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:15:56.783 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.041 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:57.041 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.041 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.041 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.041 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.042 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.042 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.300 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.300 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.559 { 00:15:57.559 "cntlid": 109, 00:15:57.559 "qid": 0, 00:15:57.559 "state": "enabled", 00:15:57.559 "listen_address": { 00:15:57.559 "trtype": "RDMA", 00:15:57.559 "adrfam": "IPv4", 00:15:57.559 "traddr": "192.168.100.8", 00:15:57.559 "trsvcid": "4420" 00:15:57.559 }, 00:15:57.559 "peer_address": { 00:15:57.559 "trtype": "RDMA", 00:15:57.559 "adrfam": "IPv4", 00:15:57.559 "traddr": "192.168.100.8", 00:15:57.559 "trsvcid": "34906" 00:15:57.559 }, 00:15:57.559 "auth": { 00:15:57.559 "state": "completed", 00:15:57.559 "digest": "sha512", 00:15:57.559 "dhgroup": "ffdhe2048" 00:15:57.559 } 00:15:57.559 } 00:15:57.559 ]' 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.559 23:06:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.559 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.818 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.818 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.818 23:06:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.818 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.753 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.754 23:06:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.011 00:15:59.011 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.011 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.011 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.268 { 00:15:59.268 "cntlid": 111, 00:15:59.268 "qid": 0, 00:15:59.268 "state": "enabled", 00:15:59.268 "listen_address": { 00:15:59.268 "trtype": "RDMA", 00:15:59.268 "adrfam": "IPv4", 00:15:59.268 "traddr": "192.168.100.8", 00:15:59.268 "trsvcid": "4420" 00:15:59.268 }, 00:15:59.268 "peer_address": { 00:15:59.268 "trtype": "RDMA", 00:15:59.268 "adrfam": "IPv4", 00:15:59.268 "traddr": "192.168.100.8", 00:15:59.268 "trsvcid": "48970" 00:15:59.268 }, 00:15:59.268 "auth": { 00:15:59.268 "state": "completed", 00:15:59.268 "digest": "sha512", 00:15:59.268 "dhgroup": "ffdhe2048" 00:15:59.268 } 00:15:59.268 } 00:15:59.268 ]' 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.268 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.526 23:06:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:16:00.093 
23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.352 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.610 00:16:00.611 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.611 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.611 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.869 23:06:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.869 23:06:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.869 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.869 23:06:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.869 23:06:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.869 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.869 { 00:16:00.869 "cntlid": 113, 00:16:00.869 "qid": 0, 00:16:00.869 "state": "enabled", 00:16:00.869 "listen_address": { 00:16:00.869 "trtype": "RDMA", 00:16:00.869 "adrfam": "IPv4", 00:16:00.869 "traddr": "192.168.100.8", 00:16:00.869 "trsvcid": "4420" 00:16:00.870 }, 00:16:00.870 "peer_address": { 00:16:00.870 "trtype": "RDMA", 00:16:00.870 "adrfam": "IPv4", 00:16:00.870 "traddr": "192.168.100.8", 00:16:00.870 "trsvcid": "46206" 00:16:00.870 }, 00:16:00.870 "auth": { 00:16:00.870 "state": "completed", 00:16:00.870 "digest": "sha512", 00:16:00.870 "dhgroup": "ffdhe3072" 00:16:00.870 } 00:16:00.870 } 00:16:00.870 ]' 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.870 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.128 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:16:01.695 23:06:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.954 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.212 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.212 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.472 { 00:16:02.472 "cntlid": 115, 00:16:02.472 "qid": 0, 00:16:02.472 "state": "enabled", 00:16:02.472 "listen_address": { 00:16:02.472 "trtype": "RDMA", 00:16:02.472 "adrfam": "IPv4", 00:16:02.472 "traddr": "192.168.100.8", 00:16:02.472 "trsvcid": "4420" 00:16:02.472 }, 00:16:02.472 "peer_address": { 00:16:02.472 "trtype": "RDMA", 00:16:02.472 "adrfam": "IPv4", 00:16:02.472 
"traddr": "192.168.100.8", 00:16:02.472 "trsvcid": "48752" 00:16:02.472 }, 00:16:02.472 "auth": { 00:16:02.472 "state": "completed", 00:16:02.472 "digest": "sha512", 00:16:02.472 "dhgroup": "ffdhe3072" 00:16:02.472 } 00:16:02.472 } 00:16:02.472 ]' 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.472 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.731 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.731 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.731 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.731 23:06:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:16:03.299 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.558 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.816 23:06:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.816 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.816 23:06:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.816 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.075 { 00:16:04.075 "cntlid": 117, 00:16:04.075 "qid": 0, 00:16:04.075 "state": "enabled", 00:16:04.075 "listen_address": { 00:16:04.075 "trtype": "RDMA", 00:16:04.075 "adrfam": "IPv4", 00:16:04.075 "traddr": "192.168.100.8", 00:16:04.075 "trsvcid": "4420" 00:16:04.075 }, 00:16:04.075 "peer_address": { 00:16:04.075 "trtype": "RDMA", 00:16:04.075 "adrfam": "IPv4", 00:16:04.075 "traddr": "192.168.100.8", 00:16:04.075 "trsvcid": "41628" 00:16:04.075 }, 00:16:04.075 "auth": { 00:16:04.075 "state": "completed", 00:16:04.075 "digest": "sha512", 00:16:04.075 "dhgroup": "ffdhe3072" 00:16:04.075 } 00:16:04.075 } 00:16:04.075 ]' 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.075 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.335 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.335 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.335 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.335 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.335 23:06:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.335 23:06:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:16:04.903 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.162 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.163 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.422 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:16:05.422 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.681 { 00:16:05.681 "cntlid": 119, 00:16:05.681 "qid": 0, 00:16:05.681 "state": "enabled", 00:16:05.681 "listen_address": { 00:16:05.681 "trtype": "RDMA", 00:16:05.681 "adrfam": "IPv4", 00:16:05.681 "traddr": "192.168.100.8", 00:16:05.681 "trsvcid": "4420" 00:16:05.681 }, 00:16:05.681 "peer_address": { 00:16:05.681 "trtype": "RDMA", 00:16:05.681 "adrfam": "IPv4", 00:16:05.681 "traddr": "192.168.100.8", 00:16:05.681 "trsvcid": "34607" 00:16:05.681 }, 00:16:05.681 "auth": { 00:16:05.681 "state": "completed", 00:16:05.681 "digest": "sha512", 00:16:05.681 "dhgroup": "ffdhe3072" 00:16:05.681 } 00:16:05.681 } 00:16:05.681 ]' 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.681 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.940 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.940 23:06:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.940 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.940 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.940 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.940 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:16:06.507 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.765 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:06.766 23:06:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.024 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.283 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.283 23:06:59 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.283 { 00:16:07.283 "cntlid": 121, 00:16:07.283 "qid": 0, 00:16:07.283 "state": "enabled", 00:16:07.283 "listen_address": { 00:16:07.283 "trtype": "RDMA", 00:16:07.283 "adrfam": "IPv4", 00:16:07.283 "traddr": "192.168.100.8", 00:16:07.283 "trsvcid": "4420" 00:16:07.283 }, 00:16:07.283 "peer_address": { 00:16:07.283 "trtype": "RDMA", 00:16:07.283 "adrfam": "IPv4", 00:16:07.283 "traddr": "192.168.100.8", 00:16:07.283 "trsvcid": "42393" 00:16:07.283 }, 00:16:07.283 "auth": { 00:16:07.283 "state": "completed", 00:16:07.283 "digest": "sha512", 00:16:07.283 "dhgroup": "ffdhe4096" 00:16:07.283 } 00:16:07.283 } 00:16:07.283 ]' 00:16:07.283 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.542 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.801 23:06:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:08.368 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.627 
23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.627 23:07:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.886 00:16:08.886 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.886 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.886 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.145 { 00:16:09.145 "cntlid": 123, 00:16:09.145 "qid": 0, 00:16:09.145 "state": "enabled", 00:16:09.145 "listen_address": { 00:16:09.145 "trtype": "RDMA", 00:16:09.145 "adrfam": "IPv4", 00:16:09.145 "traddr": "192.168.100.8", 00:16:09.145 "trsvcid": "4420" 00:16:09.145 }, 00:16:09.145 "peer_address": { 00:16:09.145 "trtype": "RDMA", 00:16:09.145 "adrfam": "IPv4", 00:16:09.145 "traddr": "192.168.100.8", 00:16:09.145 "trsvcid": "36366" 00:16:09.145 }, 00:16:09.145 "auth": { 00:16:09.145 "state": "completed", 00:16:09.145 "digest": "sha512", 00:16:09.145 "dhgroup": "ffdhe4096" 00:16:09.145 } 00:16:09.145 } 00:16:09.145 ]' 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.145 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.404 23:07:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:16:09.971 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.229 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.488 00:16:10.488 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.488 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.488 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.746 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.746 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.746 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.746 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.746 23:07:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.747 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.747 { 00:16:10.747 "cntlid": 125, 00:16:10.747 "qid": 0, 00:16:10.747 "state": "enabled", 00:16:10.747 "listen_address": { 00:16:10.747 "trtype": "RDMA", 00:16:10.747 "adrfam": "IPv4", 00:16:10.747 "traddr": "192.168.100.8", 00:16:10.747 "trsvcid": "4420" 00:16:10.747 }, 00:16:10.747 "peer_address": { 00:16:10.747 "trtype": "RDMA", 00:16:10.747 "adrfam": "IPv4", 00:16:10.747 "traddr": "192.168.100.8", 00:16:10.747 "trsvcid": "45175" 00:16:10.747 }, 00:16:10.747 "auth": { 00:16:10.747 "state": "completed", 00:16:10.747 "digest": "sha512", 00:16:10.747 "dhgroup": "ffdhe4096" 00:16:10.747 } 00:16:10.747 } 00:16:10.747 ]' 00:16:10.747 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.747 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.747 23:07:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.747 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.004 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.004 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.004 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.004 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.004 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:16:11.569 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:11.829 23:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.088 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.346 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.347 23:07:04 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.347 { 00:16:12.347 "cntlid": 127, 00:16:12.347 "qid": 0, 00:16:12.347 "state": "enabled", 00:16:12.347 "listen_address": { 00:16:12.347 "trtype": "RDMA", 00:16:12.347 "adrfam": "IPv4", 00:16:12.347 "traddr": "192.168.100.8", 00:16:12.347 "trsvcid": "4420" 00:16:12.347 }, 00:16:12.347 "peer_address": { 00:16:12.347 "trtype": "RDMA", 00:16:12.347 "adrfam": "IPv4", 00:16:12.347 "traddr": "192.168.100.8", 00:16:12.347 "trsvcid": "37974" 00:16:12.347 }, 00:16:12.347 "auth": { 00:16:12.347 "state": "completed", 00:16:12.347 "digest": "sha512", 00:16:12.347 "dhgroup": "ffdhe4096" 00:16:12.347 } 00:16:12.347 } 00:16:12.347 ]' 00:16:12.347 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.605 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.606 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.864 23:07:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:16:13.431 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.431 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:13.431 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.431 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.431 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.432 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.432 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.432 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.432 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.701 23:07:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.015 00:16:14.015 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.015 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.015 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.273 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.273 { 00:16:14.273 "cntlid": 129, 00:16:14.273 "qid": 0, 00:16:14.274 "state": "enabled", 00:16:14.274 "listen_address": { 00:16:14.274 "trtype": "RDMA", 00:16:14.274 "adrfam": "IPv4", 00:16:14.274 "traddr": "192.168.100.8", 00:16:14.274 "trsvcid": "4420" 00:16:14.274 }, 00:16:14.274 "peer_address": { 00:16:14.274 "trtype": "RDMA", 00:16:14.274 "adrfam": "IPv4", 00:16:14.274 
"traddr": "192.168.100.8", 00:16:14.274 "trsvcid": "34490" 00:16:14.274 }, 00:16:14.274 "auth": { 00:16:14.274 "state": "completed", 00:16:14.274 "digest": "sha512", 00:16:14.274 "dhgroup": "ffdhe6144" 00:16:14.274 } 00:16:14.274 } 00:16:14.274 ]' 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.274 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.533 23:07:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:16:15.101 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.101 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:15.101 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.101 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.360 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.619 00:16:15.878 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.878 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.878 23:07:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.878 { 00:16:15.878 "cntlid": 131, 00:16:15.878 "qid": 0, 00:16:15.878 "state": "enabled", 00:16:15.878 "listen_address": { 00:16:15.878 "trtype": "RDMA", 00:16:15.878 "adrfam": "IPv4", 00:16:15.878 "traddr": "192.168.100.8", 00:16:15.878 "trsvcid": "4420" 00:16:15.878 }, 00:16:15.878 "peer_address": { 00:16:15.878 "trtype": "RDMA", 00:16:15.878 "adrfam": "IPv4", 00:16:15.878 "traddr": "192.168.100.8", 00:16:15.878 "trsvcid": "55155" 00:16:15.878 }, 00:16:15.878 "auth": { 00:16:15.878 "state": "completed", 00:16:15.878 "digest": "sha512", 00:16:15.878 "dhgroup": "ffdhe6144" 00:16:15.878 } 00:16:15.878 } 00:16:15.878 ]' 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.878 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
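The cycle that ends here (sha512 digest, ffdhe6144 group, key1) is the same round auth.sh repeats for every digest/dhgroup/key combination in the run. Condensed into plain commands, one such round looks roughly like the sketch below; this is not captured output, rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default RPC socket, and key1/ckey1 refer to key names set up earlier in the script (not shown in this excerpt).

# one connect_authenticate round, host bdev path (sketch, not captured output)
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# limit the host to a single digest/dhgroup pair for this round
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# allow the host on the target subsystem, binding DH-HMAC-CHAP keys key1/ckey1
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach an authenticated controller from the host-side SPDK application
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller came up and the target reports a completed sha512/ffdhe6144 handshake
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect completed
# tear the controller down before the kernel-initiator leg and the next combination
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0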
00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.137 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:16:17.073 23:07:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.073 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.640 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.640 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.640 { 00:16:17.640 "cntlid": 133, 00:16:17.640 "qid": 0, 00:16:17.640 "state": "enabled", 00:16:17.640 "listen_address": { 00:16:17.640 "trtype": "RDMA", 00:16:17.640 "adrfam": "IPv4", 00:16:17.640 "traddr": "192.168.100.8", 00:16:17.640 "trsvcid": "4420" 00:16:17.640 }, 00:16:17.640 "peer_address": { 00:16:17.640 "trtype": "RDMA", 00:16:17.640 "adrfam": "IPv4", 00:16:17.640 "traddr": "192.168.100.8", 00:16:17.640 "trsvcid": "58272" 00:16:17.640 }, 00:16:17.640 "auth": { 00:16:17.640 "state": "completed", 00:16:17.640 "digest": "sha512", 00:16:17.640 "dhgroup": "ffdhe6144" 00:16:17.640 } 00:16:17.640 } 00:16:17.640 ]' 00:16:17.641 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.641 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.641 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.641 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.641 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.899 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.899 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.899 23:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.899 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.835 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.836 23:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.836 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.405 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
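The qpair dump and the nvme connect/disconnect that follow are the closing steps every round ends with: the target-side qpair is checked field by field, the SPDK controller is detached, and the same key material is then exercised through the kernel initiator using nvme-cli's in-band authentication options. Reduced to its essentials, that tail looks roughly like the sketch below; the angle-bracket placeholders stand for the DHHC-1 secrets printed in the log, and the variables are restated from the earlier sketch for completeness.

# closing steps of each round (sketch, not captured output)
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=803833e2-2ada-e911-906e-0017a4403562
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# inspect the authenticated qpair on the target
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect sha512
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect the round's dhgroup
# detach the SPDK host controller, then connect through the kernel initiator
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
    --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 ctrl secret>'
nvme disconnect -n $SUBNQN          # expect "disconnected 1 controller(s)"
# drop the host entry so the next digest/dhgroup/key combination starts clean
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN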
00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.405 { 00:16:19.405 "cntlid": 135, 00:16:19.405 "qid": 0, 00:16:19.405 "state": "enabled", 00:16:19.405 "listen_address": { 00:16:19.405 "trtype": "RDMA", 00:16:19.405 "adrfam": "IPv4", 00:16:19.405 "traddr": "192.168.100.8", 00:16:19.405 "trsvcid": "4420" 00:16:19.405 }, 00:16:19.405 "peer_address": { 00:16:19.405 "trtype": "RDMA", 00:16:19.405 "adrfam": "IPv4", 00:16:19.405 "traddr": "192.168.100.8", 00:16:19.405 "trsvcid": "37749" 00:16:19.405 }, 00:16:19.405 "auth": { 00:16:19.405 "state": "completed", 00:16:19.405 "digest": "sha512", 00:16:19.405 "dhgroup": "ffdhe6144" 00:16:19.405 } 00:16:19.405 } 00:16:19.405 ]' 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.405 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.663 23:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:16:20.230 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.489 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.747 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:20.747 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:16:20.747 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.747 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.748 23:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.004 00:16:21.004 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.004 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.004 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.262 { 00:16:21.262 "cntlid": 137, 00:16:21.262 "qid": 0, 00:16:21.262 "state": "enabled", 00:16:21.262 "listen_address": { 00:16:21.262 "trtype": "RDMA", 00:16:21.262 "adrfam": "IPv4", 00:16:21.262 "traddr": "192.168.100.8", 00:16:21.262 "trsvcid": "4420" 00:16:21.262 }, 00:16:21.262 "peer_address": { 00:16:21.262 "trtype": "RDMA", 00:16:21.262 "adrfam": "IPv4", 00:16:21.262 "traddr": "192.168.100.8", 00:16:21.262 "trsvcid": "38886" 00:16:21.262 }, 00:16:21.262 "auth": { 00:16:21.262 "state": "completed", 00:16:21.262 "digest": "sha512", 00:16:21.262 "dhgroup": "ffdhe8192" 00:16:21.262 } 00:16:21.262 } 00:16:21.262 ]' 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.262 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.521 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.521 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.521 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.521 23:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:16:22.087 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.346 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.604 23:07:14 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.605 23:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.863 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.121 { 00:16:23.121 "cntlid": 139, 00:16:23.121 "qid": 0, 00:16:23.121 "state": "enabled", 00:16:23.121 "listen_address": { 00:16:23.121 "trtype": "RDMA", 00:16:23.121 "adrfam": "IPv4", 00:16:23.121 "traddr": "192.168.100.8", 00:16:23.121 "trsvcid": "4420" 00:16:23.121 }, 00:16:23.121 "peer_address": { 00:16:23.121 "trtype": "RDMA", 00:16:23.121 "adrfam": "IPv4", 00:16:23.121 "traddr": "192.168.100.8", 00:16:23.121 "trsvcid": "60674" 00:16:23.121 }, 00:16:23.121 "auth": { 00:16:23.121 "state": "completed", 00:16:23.121 "digest": "sha512", 00:16:23.121 "dhgroup": "ffdhe8192" 00:16:23.121 } 00:16:23.121 } 00:16:23.121 ]' 00:16:23.121 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.122 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.122 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.380 23:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:ZDczMmYwZWQ5NDExOTdiY2YzMDA5NGE3NTA3NjYyZmNqE49N: --dhchap-ctrl-secret DHHC-1:02:NDBkZjAyMGRkNjZkMzIxMTU0NDRjYjJiYzNkYWMzMGM0ZjdhNTZhZGY0OTEwN2ZmeR5zLA==: 00:16:23.947 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.206 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.465 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.724 00:16:24.724 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.982 23:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.982 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.982 { 00:16:24.982 "cntlid": 141, 00:16:24.982 "qid": 0, 00:16:24.982 "state": "enabled", 00:16:24.982 "listen_address": { 00:16:24.982 "trtype": "RDMA", 00:16:24.982 "adrfam": "IPv4", 00:16:24.982 "traddr": "192.168.100.8", 00:16:24.982 "trsvcid": "4420" 00:16:24.982 }, 00:16:24.982 "peer_address": { 00:16:24.982 "trtype": "RDMA", 00:16:24.982 "adrfam": "IPv4", 00:16:24.982 "traddr": "192.168.100.8", 00:16:24.982 "trsvcid": "60117" 00:16:24.982 }, 00:16:24.982 "auth": { 00:16:24.982 "state": "completed", 00:16:24.982 "digest": "sha512", 00:16:24.982 "dhgroup": "ffdhe8192" 00:16:24.982 } 00:16:24.982 } 00:16:24.983 ]' 00:16:24.983 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.983 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.983 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.983 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.983 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.241 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.241 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.241 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.241 23:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:M2YyOTUxYTBmOTMwYzcxYmZlYWFhMmM3MmY5NzhmMTMzOTZiYzI5ZWQ5ZWZhZjg4pq2zpQ==: --dhchap-ctrl-secret DHHC-1:01:YTRhN2MzMjgyYTBhNzkxZjZhZGVlYTZlNWNlNWU5OTXUq8T7: 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.176 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.177 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.744 00:16:26.744 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.744 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.744 23:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.002 { 00:16:27.002 "cntlid": 143, 00:16:27.002 "qid": 0, 00:16:27.002 "state": "enabled", 00:16:27.002 "listen_address": { 00:16:27.002 "trtype": "RDMA", 00:16:27.002 "adrfam": "IPv4", 00:16:27.002 "traddr": "192.168.100.8", 00:16:27.002 "trsvcid": "4420" 00:16:27.002 }, 00:16:27.002 "peer_address": { 00:16:27.002 "trtype": "RDMA", 00:16:27.002 "adrfam": "IPv4", 00:16:27.002 "traddr": "192.168.100.8", 00:16:27.002 "trsvcid": "57367" 
00:16:27.002 }, 00:16:27.002 "auth": { 00:16:27.002 "state": "completed", 00:16:27.002 "digest": "sha512", 00:16:27.002 "dhgroup": "ffdhe8192" 00:16:27.002 } 00:16:27.002 } 00:16:27.002 ]' 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.002 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.003 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.003 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.003 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.003 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.261 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:16:27.828 23:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.828 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:27.828 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.828 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:28.086 
23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.086 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.654 00:16:28.654 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.654 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.654 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.912 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.912 { 00:16:28.912 "cntlid": 145, 00:16:28.912 "qid": 0, 00:16:28.912 "state": "enabled", 00:16:28.912 "listen_address": { 00:16:28.912 "trtype": "RDMA", 00:16:28.912 "adrfam": "IPv4", 00:16:28.912 "traddr": "192.168.100.8", 00:16:28.912 "trsvcid": "4420" 00:16:28.912 }, 00:16:28.912 "peer_address": { 00:16:28.912 "trtype": "RDMA", 00:16:28.912 "adrfam": "IPv4", 00:16:28.913 "traddr": "192.168.100.8", 00:16:28.913 "trsvcid": "32991" 00:16:28.913 }, 00:16:28.913 "auth": { 00:16:28.913 "state": "completed", 00:16:28.913 "digest": "sha512", 00:16:28.913 "dhgroup": "ffdhe8192" 00:16:28.913 } 00:16:28.913 } 00:16:28.913 ]' 00:16:28.913 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.913 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.913 23:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.913 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.913 23:07:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.913 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.913 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.913 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.171 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQyMTRkZDc5ODg3MjhlZjIxYTc4ODczOTQxZmQ1YTI0MjU5ZDdjNDE3ZWI1ZjFhXEVLUA==: --dhchap-ctrl-secret DHHC-1:03:ZDg0M2RiNGViNzcxZjgzZmI1MDM3MjNmMjI3NDcxOWM4YjBhMDIyN2RiN2I0N2VmYzk4NTBkNWQ3ODI2MzI0OYdmNN4=: 00:16:29.739 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.739 23:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:29.739 23:07:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.739 23:07:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.739 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.739 23:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:16:29.739 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.739 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:29.997 23:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.139 request: 00:17:02.139 { 00:17:02.139 "name": "nvme0", 00:17:02.139 "trtype": "rdma", 00:17:02.139 "traddr": "192.168.100.8", 00:17:02.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:02.139 "adrfam": "ipv4", 00:17:02.139 "trsvcid": "4420", 00:17:02.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.139 "dhchap_key": "key2", 00:17:02.139 "method": "bdev_nvme_attach_controller", 00:17:02.139 "req_id": 1 00:17:02.139 } 00:17:02.139 Got JSON-RPC error response 00:17:02.139 response: 00:17:02.139 { 00:17:02.139 "code": -5, 00:17:02.139 "message": "Input/output error" 00:17:02.139 } 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 
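The trace above covers the first negative authentication case: the target's host entry was re-created with key1 only, so the host's attempt to attach with key2 is rejected and rpc.py surfaces the JSON-RPC failure as "Input/output error" (code -5). A minimal standalone sketch of the same check follows; the rpc.py invocation, RPC socket path, address and NQNs are copied from this trace, while the helper name expect_attach_failure and its echo messages are illustrative additions, not part of auth.sh.

    #!/usr/bin/env bash
    # Sketch: confirm that DH-HMAC-CHAP attach fails when the host offers a key
    # the target's host entry was not configured with (key2 vs. key1 above).
    set -u
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    expect_attach_failure() {
        # $1: dhchap key name offered by the host, e.g. key2
        if "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
                -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$1"; then
            echo "unexpected: attach with $1 succeeded" >&2
            return 1
        fi
        echo "attach with $1 failed as expected (authentication rejected)"
    }

    expect_attach_failure key2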
00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 23:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.139 request: 00:17:02.139 { 00:17:02.139 "name": "nvme0", 00:17:02.139 "trtype": "rdma", 00:17:02.139 "traddr": "192.168.100.8", 00:17:02.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:02.139 "adrfam": "ipv4", 00:17:02.139 "trsvcid": "4420", 00:17:02.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.139 "dhchap_key": "key1", 00:17:02.139 "dhchap_ctrlr_key": "ckey2", 00:17:02.139 "method": "bdev_nvme_attach_controller", 00:17:02.139 "req_id": 1 00:17:02.139 } 00:17:02.139 Got JSON-RPC error response 00:17:02.139 response: 00:17:02.139 { 00:17:02.139 "code": -5, 00:17:02.139 "message": "Input/output error" 00:17:02.139 } 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.139 23:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.219 request: 00:17:34.219 { 00:17:34.219 "name": "nvme0", 00:17:34.219 "trtype": "rdma", 00:17:34.219 "traddr": "192.168.100.8", 00:17:34.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:34.219 "adrfam": "ipv4", 00:17:34.219 "trsvcid": "4420", 00:17:34.219 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.219 "dhchap_key": "key1", 00:17:34.219 "dhchap_ctrlr_key": "ckey1", 00:17:34.219 "method": "bdev_nvme_attach_controller", 00:17:34.219 "req_id": 1 00:17:34.219 } 00:17:34.219 Got JSON-RPC error response 00:17:34.219 response: 00:17:34.219 { 00:17:34.219 "code": -5, 00:17:34.219 "message": "Input/output error" 00:17:34.219 } 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 904589 ']' 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 904589' 00:17:34.219 killing process with pid 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 904589 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=937363 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 937363 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 937363 ']' 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:34.219 23:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 937363 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 937363 ']' 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
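At this point the test tears down the first target process (pid 904589) and starts a fresh nvmf_tgt with nvmf_auth debug logging so the remaining DH-HMAC-CHAP cases run against a clean instance. A rough sketch of that restart-and-wait sequence is below; the binary path and flags are taken from the trace, while the socket-polling loop merely stands in for the autotest waitforlisten helper and is only an approximation of what it does.

    # Sketch: start the SPDK target with auth logging and wait for its RPC socket.
    tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    "$tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll for the default RPC UNIX socket instead of calling waitforlisten.
    until [ -S /var/tmp/spdk.sock ]; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up; RPC available at /var/tmp/spdk.sock"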
00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.219 23:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.219 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.220 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:34.220 { 00:17:34.220 "cntlid": 1, 00:17:34.220 "qid": 0, 00:17:34.220 "state": "enabled", 00:17:34.220 "listen_address": { 00:17:34.220 "trtype": "RDMA", 00:17:34.220 "adrfam": "IPv4", 00:17:34.220 "traddr": "192.168.100.8", 00:17:34.220 "trsvcid": "4420" 00:17:34.220 }, 00:17:34.220 "peer_address": { 00:17:34.220 "trtype": "RDMA", 00:17:34.220 "adrfam": "IPv4", 00:17:34.220 "traddr": "192.168.100.8", 00:17:34.220 "trsvcid": "38406" 00:17:34.220 }, 00:17:34.220 "auth": { 00:17:34.220 "state": "completed", 00:17:34.220 "digest": "sha512", 00:17:34.220 "dhgroup": "ffdhe8192" 00:17:34.220 } 00:17:34.220 } 00:17:34.220 ]' 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.220 23:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.220 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MjJlY2NjZjU5NWZkM2FiOTBiNjJhODNiYTE1NWM0NzNlNDdlZmJiZWM0YTdmMmI5NGIzYWJjNGYyMTcxMjFmM+nG/kE=: 00:17:34.479 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:34.737 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.738 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.738 23:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.738 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:34.738 23:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:34.738 23:08:27 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.738 23:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.828 request: 00:18:06.828 { 00:18:06.829 "name": "nvme0", 00:18:06.829 "trtype": "rdma", 00:18:06.829 "traddr": "192.168.100.8", 00:18:06.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:06.829 "adrfam": "ipv4", 00:18:06.829 "trsvcid": "4420", 00:18:06.829 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.829 "dhchap_key": "key3", 00:18:06.829 "method": "bdev_nvme_attach_controller", 00:18:06.829 "req_id": 1 00:18:06.829 } 00:18:06.829 Got JSON-RPC error response 00:18:06.829 response: 00:18:06.829 { 00:18:06.829 "code": -5, 00:18:06.829 "message": "Input/output error" 00:18:06.829 } 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.829 23:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.908 request: 00:18:38.908 { 00:18:38.908 "name": "nvme0", 00:18:38.908 "trtype": "rdma", 00:18:38.908 "traddr": "192.168.100.8", 00:18:38.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:38.908 "adrfam": "ipv4", 00:18:38.908 "trsvcid": "4420", 00:18:38.908 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.908 "dhchap_key": "key3", 00:18:38.908 "method": "bdev_nvme_attach_controller", 00:18:38.908 "req_id": 1 00:18:38.908 } 00:18:38.908 Got JSON-RPC error response 00:18:38.908 response: 00:18:38.908 { 00:18:38.908 "code": -5, 00:18:38.908 "message": "Input/output error" 00:18:38.908 } 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.908 23:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.908 request: 00:18:38.908 { 00:18:38.908 "name": "nvme0", 00:18:38.908 "trtype": "rdma", 00:18:38.908 "traddr": "192.168.100.8", 00:18:38.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:38.908 "adrfam": "ipv4", 00:18:38.908 "trsvcid": "4420", 00:18:38.908 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.908 "dhchap_key": "key0", 00:18:38.908 "dhchap_ctrlr_key": "key1", 00:18:38.908 "method": "bdev_nvme_attach_controller", 00:18:38.908 "req_id": 1 00:18:38.908 } 00:18:38.908 Got JSON-RPC error response 00:18:38.908 response: 00:18:38.908 { 00:18:38.908 "code": -5, 
00:18:38.908 "message": "Input/output error" 00:18:38.908 } 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.908 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 904837 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 904837 ']' 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 904837 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:18:38.908 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 904837 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 904837' 00:18:38.909 killing process with pid 904837 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 904837 00:18:38.909 23:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 904837 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- 
# sync 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:38.909 rmmod nvme_rdma 00:18:38.909 rmmod nvme_fabrics 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 937363 ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 937363 ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 937363' 00:18:38.909 killing process with pid 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 937363 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aP2 /tmp/spdk.key-sha256.hXw /tmp/spdk.key-sha384.pyO /tmp/spdk.key-sha512.4ut /tmp/spdk.key-sha512.VvF /tmp/spdk.key-sha384.jbz /tmp/spdk.key-sha256.J0x '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:38.909 00:18:38.909 real 4m20.937s 00:18:38.909 user 9m22.745s 00:18:38.909 sys 0m18.857s 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:38.909 23:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.909 ************************************ 00:18:38.909 END TEST nvmf_auth_target 00:18:38.909 ************************************ 00:18:38.909 23:09:29 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:18:38.909 23:09:29 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:38.909 23:09:29 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:38.909 23:09:29 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:18:38.909 23:09:29 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:18:38.909 23:09:29 nvmf_rdma -- 
nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:18:38.909 23:09:29 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:38.909 23:09:29 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:38.909 23:09:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:38.909 ************************************ 00:18:38.909 START TEST nvmf_device_removal 00:18:38.909 ************************************ 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1124 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:18:38.909 * Looking for test storage... 00:18:38.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:38.909 23:09:29 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # 
CONFIG_RDMA=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 
-- # CONFIG_CROSS_PREFIX= 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:38.909 #define SPDK_CONFIG_H 00:18:38.909 #define SPDK_CONFIG_APPS 1 00:18:38.909 #define SPDK_CONFIG_ARCH native 00:18:38.909 #undef SPDK_CONFIG_ASAN 00:18:38.909 #undef SPDK_CONFIG_AVAHI 00:18:38.909 #undef SPDK_CONFIG_CET 00:18:38.909 #define SPDK_CONFIG_COVERAGE 1 00:18:38.909 #define SPDK_CONFIG_CROSS_PREFIX 00:18:38.909 #undef SPDK_CONFIG_CRYPTO 00:18:38.909 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:38.909 #undef SPDK_CONFIG_CUSTOMOCF 00:18:38.909 #undef SPDK_CONFIG_DAOS 00:18:38.909 #define SPDK_CONFIG_DAOS_DIR 00:18:38.909 #define SPDK_CONFIG_DEBUG 1 00:18:38.909 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:38.909 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:18:38.909 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:38.909 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:38.909 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:38.909 #undef SPDK_CONFIG_DPDK_UADK 00:18:38.909 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:18:38.909 #define SPDK_CONFIG_EXAMPLES 1 00:18:38.909 #undef SPDK_CONFIG_FC 00:18:38.909 #define SPDK_CONFIG_FC_PATH 00:18:38.909 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:38.909 
#define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:38.909 #undef SPDK_CONFIG_FUSE 00:18:38.909 #undef SPDK_CONFIG_FUZZER 00:18:38.909 #define SPDK_CONFIG_FUZZER_LIB 00:18:38.909 #undef SPDK_CONFIG_GOLANG 00:18:38.909 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:38.909 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:38.909 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:38.909 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:38.909 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:38.909 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:38.909 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:38.909 #define SPDK_CONFIG_IDXD 1 00:18:38.909 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:38.909 #undef SPDK_CONFIG_IPSEC_MB 00:18:38.909 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:38.909 #define SPDK_CONFIG_ISAL 1 00:18:38.909 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:38.909 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:38.909 #define SPDK_CONFIG_LIBDIR 00:18:38.909 #undef SPDK_CONFIG_LTO 00:18:38.909 #define SPDK_CONFIG_MAX_LCORES 00:18:38.909 #define SPDK_CONFIG_NVME_CUSE 1 00:18:38.909 #undef SPDK_CONFIG_OCF 00:18:38.909 #define SPDK_CONFIG_OCF_PATH 00:18:38.909 #define SPDK_CONFIG_OPENSSL_PATH 00:18:38.909 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:38.909 #define SPDK_CONFIG_PGO_DIR 00:18:38.909 #undef SPDK_CONFIG_PGO_USE 00:18:38.909 #define SPDK_CONFIG_PREFIX /usr/local 00:18:38.909 #undef SPDK_CONFIG_RAID5F 00:18:38.909 #undef SPDK_CONFIG_RBD 00:18:38.909 #define SPDK_CONFIG_RDMA 1 00:18:38.909 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:38.909 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:38.909 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:38.909 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:38.909 #define SPDK_CONFIG_SHARED 1 00:18:38.909 #undef SPDK_CONFIG_SMA 00:18:38.909 #define SPDK_CONFIG_TESTS 1 00:18:38.909 #undef SPDK_CONFIG_TSAN 00:18:38.909 #define SPDK_CONFIG_UBLK 1 00:18:38.909 #define SPDK_CONFIG_UBSAN 1 00:18:38.909 #undef SPDK_CONFIG_UNIT_TESTS 00:18:38.909 #undef SPDK_CONFIG_URING 00:18:38.909 #define SPDK_CONFIG_URING_PATH 00:18:38.909 #undef SPDK_CONFIG_URING_ZNS 00:18:38.909 #undef SPDK_CONFIG_USDT 00:18:38.909 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:38.909 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:38.909 #undef SPDK_CONFIG_VFIO_USER 00:18:38.909 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:38.909 #define SPDK_CONFIG_VHOST 1 00:18:38.909 #define SPDK_CONFIG_VIRTIO 1 00:18:38.909 #undef SPDK_CONFIG_VTUNE 00:18:38.909 #define SPDK_CONFIG_VTUNE_DIR 00:18:38.909 #define SPDK_CONFIG_WERROR 1 00:18:38.909 #define SPDK_CONFIG_WPDK_DIR 00:18:38.909 #undef SPDK_CONFIG_XNVME 00:18:38.909 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:38.909 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 
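The applications.sh trace above reads include/spdk/config.h and glob-matches its full contents against '#define SPDK_CONFIG_DEBUG' before deciding whether debug-only app options apply. A minimal sketch of that style of check, assuming the workspace layout seen in the log (the helper name below is invented for illustration and is not taken from the SPDK sources):

    #!/usr/bin/env bash
    # Sketch of the config.h flag check traced above; paths assume the Jenkins workspace layout.
    spdk_root=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    config_h="$spdk_root/include/spdk/config.h"

    has_config_flag() {
        # Glob-match the whole header against "#define <flag>", as the xtrace output shows.
        local flag=$1
        [[ -e $config_h ]] && [[ $(<"$config_h") == *"#define $flag"* ]]
    }

    if has_config_flag SPDK_CONFIG_DEBUG; then
        echo "debug build detected: debug-only app options may apply"
    fi

Matching the whole header with a shell glob keeps the check in-process, with no grep child needed, which is presumably why the traced test harness takes this form.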
00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal 
-- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : mlx5 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@200 -- # cat 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:18:38.910 23:09:29 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:38.910 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:18:38.911 23:09:29 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 948719 ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 948719 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.1TvbYM 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1TvbYM/tests/target /tmp/spdk.1TvbYM 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 
-- # mounts["$mount"]=spdk_devtmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=185246400512 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974316032 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=10727915520 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=97924771840 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=62386176 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=39171633152 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=23232512 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=97985486848 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=1671168 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:18:38.911 * Looking for test storage... 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=185246400512 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=12942508032 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # set -o errtrace 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # true 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1688 -- # xtrace_fd 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
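The set_test_storage trace above sizes up candidate directories with df and keeps the first mount whose available space covers the 2147483648-byte request. A minimal sketch of that selection logic, assuming only coreutils df and awk (the function name and fallback path below are illustrative, not SPDK's own):

    #!/usr/bin/env bash
    # Sketch of the storage-selection logic traced above: walk candidate directories,
    # read the available-space column from df, and keep the first one that is big enough.
    requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, matching the 2147483648 bytes in the trace

    pick_test_storage() {
        local dir avail
        for dir in "$@"; do
            [[ -d $dir ]] || mkdir -p "$dir" || continue
            # df -P prints one POSIX-format data line; column 4 is available space in 1K blocks.
            avail=$(df -P "$dir" | awk 'NR==2 {print $4 * 1024}')
            avail=${avail:-0}
            if (( avail >= requested_size )); then
                printf '%s\n' "$dir"
                return 0
            fi
        done
        return 1
    }

    # Example: prefer the test directory, fall back to a scratch directory under /tmp.
    pick_test_storage /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk_scratch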
00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.911 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.912 23:09:29 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:44.175 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.175 23:09:35 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:44.175 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:44.175 Found net devices under 0000:da:00.0: mlx_0_0 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:44.175 Found net devices under 0000:da:00.1: mlx_0_1 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:18:44.175 23:09:35 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:44.175 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:44.176 222: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.176 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:44.176 altname enp218s0f0np0 00:18:44.176 altname ens818f0np0 00:18:44.176 inet 192.168.100.8/24 scope global mlx_0_0 00:18:44.176 valid_lft forever preferred_lft forever 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:44.176 223: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.176 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:44.176 altname enp218s0f1np1 00:18:44.176 altname ens818f1np1 00:18:44.176 inet 192.168.100.9/24 scope global mlx_0_1 00:18:44.176 valid_lft forever preferred_lft forever 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:44.176 192.168.100.9' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:44.176 192.168.100.9' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:44.176 192.168.100.9' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:18:44.176 23:09:35 
nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:44.176 ************************************ 00:18:44.176 START TEST nvmf_device_removal_pci_remove_no_srq 00:18:44.176 ************************************ 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1124 -- # test_remove_and_rescan --no-srq 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=952060 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 952060 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 952060 ']' 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:44.176 23:09:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.176 [2024-06-07 23:09:35.799856] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
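The run_test invocation above starts the target through nvmfappstart -m 0x3 and then blocks in waitforlisten until PID 952060 is serving RPCs on /var/tmp/spdk.sock. A rough stand-alone equivalent of those two helpers, using only the flags visible in the trace plus scripts/rpc.py for the readiness probe (the polling loop is an illustration, not the harness code):

    # Start the NVMe-oF target on cores 0-1 with the flags seen in the trace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll the default RPC socket until the app answers, as waitforlisten does.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done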
00:18:44.176 [2024-06-07 23:09:35.799903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.176 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.176 [2024-06-07 23:09:35.862555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.176 [2024-06-07 23:09:35.943948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.176 [2024-06-07 23:09:35.943982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.176 [2024-06-07 23:09:35.943989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.177 [2024-06-07 23:09:35.943995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.177 [2024-06-07 23:09:35.944000] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.177 [2024-06-07 23:09:35.944046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.177 [2024-06-07 23:09:35.944049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:18:44.435 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.436 [2024-06-07 23:09:36.667122] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e0c360/0x1e10850) succeed. 00:18:44.436 [2024-06-07 23:09:36.675874] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e0d860/0x1e51ee0) succeed. 
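With both reactors up, create_subsystem_and_connect first creates the RDMA transport; rpc_cmd in the trace is the harness wrapper that forwards the same arguments to the target's RPC server, so the call is equivalent to:

    # RDMA transport without shared receive queues, exactly as traced
    # (this variant of the test runs with --no-srq).
    ./scripts/rpc.py nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 --no-srq

The two create_ib_device notices that follow confirm that mlx5_0 and mlx5_1 were picked up by the transport.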
00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:44.436 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:44.695 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:18:44.695 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:44.695 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:18:44.695 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:18:44.695 
23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 [2024-06-07 23:09:36.793334] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:44.696 23:09:36 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:44.696 [2024-06-07 23:09:36.868048] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:18:44.696 23:09:36 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=952322 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 952322 /var/tmp/bdevperf.sock 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 952322 ']' 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:44.696 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
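The lines above show the per-NIC provisioning that create_subsystem_and_connect_on_netdev performs for mlx_0_0 and then repeats for mlx_0_1 (listening on 192.168.100.9), followed by launching bdevperf with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90. Condensed from the trace, the target-side RPC sequence for one device is:

    # One malloc bdev per NIC, exported through its own subsystem and listener
    # (values copied from the trace; mlx_0_1 gets the same treatment on .9).
    rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420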
00:18:44.697 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:44.697 23:09:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:45.633 Nvme_mlx_0_0n1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.633 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:45.892 Nvme_mlx_0_1n1 00:18:45.892 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.892 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=952558 00:18:45.892 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:18:45.892 23:09:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:51.164 23:09:42 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:51.164 23:09:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.164 mlx5_0 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:18:51.164 23:09:43 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:18:51.164 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:51.165 23:09:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:18:51.165 [2024-06-07 23:09:43.060359] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:18:51.165 [2024-06-07 23:09:43.060443] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:51.165 [2024-06-07 23:09:43.060534] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:51.165 [2024-06-07 23:09:43.060545] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:18:51.165 [2024-06-07 23:09:43.060551] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:18:51.165 [2024-06-07 23:09:43.060557] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:18:51.165 [2024-06-07 23:09:43.060563] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.165 [2024-06-07 23:09:43.060568] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060572] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060577] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060582] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060587] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060592] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060597] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:18:51.165 [2024-06-07 23:09:43.060602] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.165 [2024-06-07 23:09:43.060607] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060611] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060616] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060621] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060626] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:18:51.165 [2024-06-07 23:09:43.060630] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:18:51.165 [2024-06-07 23:09:43.060635] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 23:09:43.060640] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [2024-06-07 23:09:43.060645] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.165 [2024-06-07 
23:09:43.060649] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.165 [... per-request dump condensed: the alternating rdma.c: 632 "Request Data From Pool: 0|1" and rdma.c: 634 "Request opcode: 1|2" lines repeat here for the remaining outstanding requests on this qpair, timestamps 2024-06-07 23:09:43.060654 through 23:09:43.061752 ...]
00:18:51.167 [2024-06-07 23:09:43.061756] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.167 [2024-06-07 23:09:43.061761] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.167 [2024-06-07 23:09:43.061765] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.167 [2024-06-07 23:09:43.061770] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.167 [2024-06-07 23:09:43.061775] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:51.167 [2024-06-07 23:09:43.061780] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:18:51.167 [2024-06-07 23:09:43.061786] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:18:57.729 23:09:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:18:57.729 [2024-06-07 23:09:49.709435] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x2049ac0, err 11. Skip rescan. 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:18:57.729 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:18:57.730 23:09:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:18:57.987 [2024-06-07 23:09:50.054057] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x204af70/0x1e10850) succeed. 00:18:57.987 [2024-06-07 23:09:50.054123] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
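For reference, the helpers exercised in the trace above can be reconstructed roughly as follows. This is a sketch inferred from the xtrace output, not the verbatim target/device_removal.sh or nvmf/common.sh source; in particular, xtrace does not show redirection targets, so the sysfs files written to by remove_one_nic (".../remove") and rescan_pci ("/sys/bus/pci/rescan") are assumptions, and the PCI address that appears literally in the traced readlink is presumably derived per device in the real script (a glob is used below).

get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, without the /prefix (e.g. 192.168.100.8)
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_pci_dir() {
    local dev_name=$1
    # Resolve the netdev back to its PCI device directory,
    # e.g. /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0
    readlink -f /sys/bus/pci/devices/*/net/"$dev_name"/device
}

check_rdma_dev_exists_in_nvmf_tgt() {
    local rdma_dev_name=$1
    # The device is "known" to the target while nvmf_get_stats lists it for a poll group
    rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep "$rdma_dev_name"
}

get_rdma_dev_count_in_nvmf_tgt() {
    rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
}

remove_one_nic() {
    local dev_name=$1
    # Hot-remove the PCI function backing the netdev (the /remove target is assumed)
    echo 1 > "$(get_pci_dir "$dev_name")/remove"
}

rescan_pci() {
    # Ask the kernel to rescan the PCI bus so the removed function can reappear (path assumed)
    echo 1 > /sys/bus/pci/rescan
}

In the run above, check_rdma_dev_exists_in_nvmf_tgt stops matching mlx5_0 on the first iteration of the seq 1 10 loop after the PCI remove, which is what lets the loop break, record ib_count_after_remove=1, and move on to the rescan.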
00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:01.272 [2024-06-07 23:09:53.107497] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:01.272 [2024-06-07 23:09:53.107533] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:19:01.272 [2024-06-07 23:09:53.107549] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:01.272 [2024-06-07 23:09:53.107562] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:19:01.272 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.273 23:09:53 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.273 mlx5_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:01.273 23:09:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:19:01.273 [2024-06-07 23:09:53.261523] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:19:01.273 [2024-06-07 23:09:53.261601] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:01.273 [2024-06-07 23:09:53.271042] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:01.273 [2024-06-07 23:09:53.271057] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 93 00:19:01.273 [2024-06-07 23:09:53.271063] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:19:01.273 [2024-06-07 23:09:53.271069] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.273 [2024-06-07 23:09:53.271074] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:01.273 [2024-06-07 23:09:53.271079] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:19:01.273 [2024-06-07 23:09:53.271085] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.273 [2024-06-07 23:09:53.271089] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:19:01.273 [2024-06-07 23:09:53.271094] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.273 [2024-06-07 23:09:53.271099] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:19:01.273 [2024-06-07 23:09:53.271104] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.273 [2024-06-07 23:09:53.271108] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.273 [2024-06-07 23:09:53.271113] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:01.273 [2024-06-07 23:09:53.271118] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:19:01.273 [2024-06-07 23:09:53.271123] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.273 [2024-06-07 23:09:53.271127] rdma.c: 632:nvmf_rdma_dump_request: 
*ERROR*: Request Data From Pool: 0 00:19:01.273 [... per-request dump condensed: the alternating rdma.c: 632 "Request Data From Pool: 0|1" and rdma.c: 634 "Request opcode: 1|2" lines repeat here for the remaining outstanding requests on this qpair, timestamps 2024-06-07 23:09:53.271136 through 23:09:53.272052 ...] 00:19:01.274 [2024-06-07 23:09:53.272058] rdma.c:
634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.274 [2024-06-07 23:09:53.272065] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:19:01.274 [2024-06-07 23:09:53.272072] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:19:01.274 [2024-06-07 23:09:53.272078] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.274 [2024-06-07 23:09:53.272085] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:01.274 [2024-06-07 23:09:53.272092] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.274 [2024-06-07 23:09:53.272099] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:01.274 [2024-06-07 23:09:53.272106] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.274 [2024-06-07 23:09:53.272112] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:01.274 [2024-06-07 23:09:53.272120] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:19:01.274 [2024-06-07 23:09:53.272127] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:07.874 
23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:19:07.874 23:09:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:19:07.874 [2024-06-07 23:10:00.058544] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x1e93430, err 11. Skip rescan. 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:19:07.874 23:10:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:19:08.132 [2024-06-07 23:10:00.406718] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2016820/0x1e51ee0) succeed. 00:19:08.132 [2024-06-07 23:10:00.406792] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
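For reference, the device-count probe that the surrounding seq 1 10 retry loops key off is just the nvmf_get_stats RPC piped through jq. A minimal sketch of the same check, assuming the SPDK tree at /var/jenkins/workspace/nvmf-phy-autotest/spdk and that rpc_cmd is roughly equivalent to invoking scripts/rpc.py against the target's default RPC socket:

    # count RDMA devices attached to the target's first poll group
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices | length'

    # check whether a specific device, e.g. mlx5_1, is back by name
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_1

The test leaves its retry loop once the reported count exceeds ib_count_after_remove, i.e. once the re-created mlx5_1 shows up in the target's stats again.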
00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:19:11.418 [2024-06-07 23:10:03.518639] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:19:11.418 [2024-06-07 23:10:03.518672] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:19:11.418 [2024-06-07 23:10:03.518689] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:11.418 [2024-06-07 23:10:03.518703] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:19:11.418 23:10:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 
952558 00:20:19.112 0 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 952322 ']' 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952322' 00:20:19.112 killing process with pid 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 952322 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:20:19.112 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:20:19.112 [2024-06-07 23:09:36.922004] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:20:19.112 [2024-06-07 23:09:36.922065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952322 ] 00:20:19.112 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.112 [2024-06-07 23:09:36.975702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.112 [2024-06-07 23:09:37.048396] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.112 Running I/O for 90 seconds... 
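For orientation, the killprocess 952322 trace above follows the usual autotest helper pattern; a rough sketch reconstructed from the visible xtrace (not the verbatim autotest_common.sh source), with the sudo-owned branch omitted since it is not taken in this run:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # no pid supplied
        kill -0 "$pid" || return 1         # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # resolve the process name (reactor_2 here); a sudo-owned process
            # would be handled differently
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }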
00:20:19.112 [2024-06-07 23:09:43.059055] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:19.112 [2024-06-07 23:09:43.059090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.112 [2024-06-07 23:09:43.059100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.112 [2024-06-07 23:09:43.059109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.112 [2024-06-07 23:09:43.059115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.112 [2024-06-07 23:09:43.059123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.112 [2024-06-07 23:09:43.059129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.112 [2024-06-07 23:09:43.059136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.112 [2024-06-07 23:09:43.059142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.112 [2024-06-07 23:09:43.062020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.112 [2024-06-07 23:09:43.062032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.112 [2024-06-07 23:09:43.062052] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:19.112 [2024-06-07 23:09:43.069053] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.079077] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.089105] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.099131] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.109157] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.119185] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.129212] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.139240] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.149266] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.159292] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.112 [2024-06-07 23:09:43.169317] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.179344] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.189370] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.199397] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.209423] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.219449] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.229477] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.239502] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.249528] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.259553] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.269580] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.279607] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.289635] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.299660] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.310182] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.320210] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.330237] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.340439] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.350464] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.360896] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.370922] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.380949] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.391078] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.112 [2024-06-07 23:09:43.401096] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.411125] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.421654] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.432374] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.442737] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.112 [2024-06-07 23:09:43.452912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.463896] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.474108] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.484133] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.494469] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.504493] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.514519] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.524546] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.534649] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.544916] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.554941] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.565118] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.575774] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.585944] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.596857] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.606958] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.617030] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.627164] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.113 [2024-06-07 23:09:43.637292] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.647386] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.657610] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.667721] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.677856] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.687883] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.697908] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.707977] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.718129] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.728946] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.739167] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.749191] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.760007] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.770528] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.780555] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.790657] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.800795] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.810820] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.820846] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.831409] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.841437] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.851465] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.861490] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.113 [2024-06-07 23:09:43.871518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.881544] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.891571] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.901596] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.912262] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.922475] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.932504] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.942540] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.952676] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.962704] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.972783] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.982900] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:43.993030] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.003177] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.013505] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.023590] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.033630] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.043681] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.053879] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.113 [2024-06-07 23:09:44.064084] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.113 [2024-06-07 23:09:44.064517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:205048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:205056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:205064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:205072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:205080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:205088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:205096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:205104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:205112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 
23:09:44.064666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:205120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:205128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x180600 00:20:19.113 [2024-06-07 23:09:44.064687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.113 [2024-06-07 23:09:44.064695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:205136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:205144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:205152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:205160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:205168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:205176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:205184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:205192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:205200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:205208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:205216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:205224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:205232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:205240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:205248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:205256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:205264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:205272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:205280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:205288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.064987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:205296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.064994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:205304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:205312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:205320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:205328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:205336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:205344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:205352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:205360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:205368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:205376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:205384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:205392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:205400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:205408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:205416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.114 [2024-06-07 23:09:44.065228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:205424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x180600 00:20:19.114 [2024-06-07 23:09:44.065234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:205432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:205440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:205448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:205456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:205464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:205472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:205480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:205488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:205496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:205504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:205512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:205520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:205528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:205536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:205544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:205552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:205560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:205568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:205576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:205584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:205592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:205600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:205608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:205616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:205624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:205632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:205640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:205648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:205656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:205664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:205672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:205680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:205688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:205696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:205704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x180600 00:20:19.115 [2024-06-07 23:09:44.065742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.115 [2024-06-07 23:09:44.065750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:205712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:205720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:205728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:205736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:205744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:205752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:205760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:205768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:205776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:205784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:205792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:205800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:205808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:205816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x180600 00:20:19.116 [2024-06-07 23:09:44.065944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:205824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.065959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:205832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.065973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:205840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.065988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.065995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:205848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:205856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:205864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:205872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:205880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:205888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:205896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:205904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:205912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:205920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:205928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:205936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:205944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.116 [2024-06-07 23:09:44.066186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:205952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.116 [2024-06-07 23:09:44.066192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:205960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:205968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:205976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:205984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:205992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 
m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:206000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:206008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:206016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:206024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:206032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:206040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:206048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.066373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:206056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.117 [2024-06-07 23:09:44.066382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.079223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.117 [2024-06-07 23:09:44.079236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.117 [2024-06-07 23:09:44.079244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:206064 len:8 PRP1 0x0 PRP2 0x0 00:20:19.117 [2024-06-07 23:09:44.079251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.117 [2024-06-07 23:09:44.080723] 
nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:44.081024] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.117 [2024-06-07 23:09:44.081037] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:44.081043] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:44.081057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:44.081065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:44.081075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.117 [2024-06-07 23:09:44.081081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.117 [2024-06-07 23:09:44.081089] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.117 [2024-06-07 23:09:44.081107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.117 [2024-06-07 23:09:44.081113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:45.086670] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.117 [2024-06-07 23:09:45.086701] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:45.086707] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:45.086741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:45.086749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:45.086766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.117 [2024-06-07 23:09:45.086774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.117 [2024-06-07 23:09:45.086781] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.117 [2024-06-07 23:09:45.086801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.117 [2024-06-07 23:09:45.086809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:46.089340] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.117 [2024-06-07 23:09:46.089372] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:46.089383] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:46.089400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:46.089408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:46.089419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.117 [2024-06-07 23:09:46.089425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.117 [2024-06-07 23:09:46.089432] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.117 [2024-06-07 23:09:46.089453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.117 [2024-06-07 23:09:46.089460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:48.094917] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:48.094949] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:48.094971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:48.094979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:48.095499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.117 [2024-06-07 23:09:48.095509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.117 [2024-06-07 23:09:48.095516] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.117 [2024-06-07 23:09:48.095608] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.117 [2024-06-07 23:09:48.095618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:50.101034] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:50.101060] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:50.101084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:50.101093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:50.101106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.117 [2024-06-07 23:09:50.101114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.117 [2024-06-07 23:09:50.101122] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.117 [2024-06-07 23:09:50.101144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.117 [2024-06-07 23:09:50.101155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.117 [2024-06-07 23:09:52.106107] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.117 [2024-06-07 23:09:52.106133] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:19.117 [2024-06-07 23:09:52.106153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.117 [2024-06-07 23:09:52.106161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:19.117 [2024-06-07 23:09:52.106176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:19.118 [2024-06-07 23:09:52.106182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:19.118 [2024-06-07 23:09:52.106189] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:19.118 [2024-06-07 23:09:52.106209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.118 [2024-06-07 23:09:52.106216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:19.118 [2024-06-07 23:09:53.172379] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:19.118 [2024-06-07 23:09:53.264298] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:19.118 [2024-06-07 23:09:53.264321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.118 [2024-06-07 23:09:53.264330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.118 [2024-06-07 23:09:53.264337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.118 [2024-06-07 23:09:53.264344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.118 [2024-06-07 23:09:53.264351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.118 [2024-06-07 23:09:53.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.118 [2024-06-07 23:09:53.264365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.118 [2024-06-07 23:09:53.264371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32508 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:20:19.118 [2024-06-07 23:09:53.266618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.118 [2024-06-07 23:09:53.266653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.118 [2024-06-07 23:09:53.266720] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:19.118 [2024-06-07 23:09:53.274309] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.284333] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.294358] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.304383] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.314410] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.324435] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.334462] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.344490] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.354518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.364544] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.118 [2024-06-07 23:09:53.374572] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.384597] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.394625] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.404651] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.414678] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.424703] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.434729] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.444756] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.454781] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.464808] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.474836] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.484863] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.494888] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.504915] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.514941] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.524968] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.534995] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.545020] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.555047] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.565072] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.575097] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.585123] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.595150] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.118 [2024-06-07 23:09:53.605175] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.615200] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.625227] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.635254] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.645281] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.655307] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.665332] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.675359] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.685385] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.695411] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.705437] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.715464] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.725489] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.735516] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.745543] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.755570] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.765596] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.775621] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.785647] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.795673] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.805701] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.815727] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.825755] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.118 [2024-06-07 23:09:53.835782] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.845807] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.855832] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.865859] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.875884] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.885909] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.895934] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.905960] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.915987] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.926016] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.936041] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.946066] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.956091] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.966118] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.118 [2024-06-07 23:09:53.976145] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:53.986171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:53.996198] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.006225] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.016251] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.026276] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.036304] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.046330] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.056356] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.119 [2024-06-07 23:09:54.066384] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.076412] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.086439] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.096466] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.106490] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.116902] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.126929] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.137936] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.147954] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.158740] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.168766] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.180961] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.190997] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.201236] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.212225] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.222308] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.232577] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.242724] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.252749] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:19.119 [2024-06-07 23:09:54.263049] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:19.119 [2024-06-07 23:09:54.269207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.119 [2024-06-07 23:09:54.269591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.119 [2024-06-07 23:09:54.269599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.269990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.269997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 
dnr:0 00:20:19.120 [2024-06-07 23:09:54.270074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.120 [2024-06-07 23:09:54.270137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.120 [2024-06-07 23:09:54.270145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.121 [2024-06-07 23:09:54.270278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:63 nsid:1 lba:34848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34920 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.121 [2024-06-07 23:09:54.270607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34992 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000792c000 len:0x1000 key:0x1bfc00 00:20:19.121 [2024-06-07 23:09:54.270614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 
key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1bfc00 00:20:19.122 
[2024-06-07 23:09:54.270879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.270992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.270999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.271007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.271017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.271025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.271032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.271040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.271046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.271054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bfc00 00:20:19.122 [2024-06-07 23:09:54.271060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:c46a0da0 sqhd:e540 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.283788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.122 [2024-06-07 23:09:54.283800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.122 [2024-06-07 23:09:54.283806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35240 len:8 PRP1 0x0 PRP2 0x0 00:20:19.122 [2024-06-07 23:09:54.283813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.122 [2024-06-07 23:09:54.283855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.122 [2024-06-07 23:09:54.285748] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.122 [2024-06-07 23:09:54.285764] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.122 [2024-06-07 23:09:54.285773] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.122 [2024-06-07 23:09:54.285786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.122 [2024-06-07 23:09:54.285793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.122 [2024-06-07 23:09:54.285831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.122 [2024-06-07 23:09:54.285837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.122 [2024-06-07 23:09:54.285844] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:09:54.285861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.123 [2024-06-07 23:09:54.285867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:09:55.288384] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.123 [2024-06-07 23:09:55.288418] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:09:55.288424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:09:55.288443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:09:55.288451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.123 [2024-06-07 23:09:55.288461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:09:55.288468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:09:55.288475] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:09:55.288494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.123 [2024-06-07 23:09:55.288501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:09:56.291128] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:19.123 [2024-06-07 23:09:56.291159] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:09:56.291166] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:09:56.291200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:09:56.291208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.123 [2024-06-07 23:09:56.291219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:09:56.291225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:09:56.291233] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:09:56.291253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.123 [2024-06-07 23:09:56.291261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:09:58.296894] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:09:58.296934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:09:58.296955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:09:58.296963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.123 [2024-06-07 23:09:58.296999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:09:58.297007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:09:58.297020] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:09:58.297055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.123 [2024-06-07 23:09:58.297062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:10:00.304070] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:10:00.304100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:10:00.304122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:10:00.304131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.123 [2024-06-07 23:10:00.304158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:10:00.304164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:10:00.304172] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:10:00.304192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.123 [2024-06-07 23:10:00.304200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:10:02.311057] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:10:02.311093] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:10:02.311115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:10:02.311123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:20:19.123 [2024-06-07 23:10:02.312783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:10:02.312798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:10:02.312805] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:10:02.312827] bdev_nvme.c:2884:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:20:19.123 [2024-06-07 23:10:02.313579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.123 [2024-06-07 23:10:02.313633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:10:03.317681] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:19.123 [2024-06-07 23:10:03.317720] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:20:19.123 [2024-06-07 23:10:03.317743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:19.123 [2024-06-07 23:10:03.317756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:20:19.123 [2024-06-07 23:10:03.318517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:20:19.123 [2024-06-07 23:10:03.318533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:20:19.123 [2024-06-07 23:10:03.318540] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:20:19.123 [2024-06-07 23:10:03.318577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.123 [2024-06-07 23:10:03.318585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:20:19.123 [2024-06-07 23:10:04.375312] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
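The run of "RDMA address resolution error" / "Resetting controller failed." records above, ending in a single "Resetting controller successful.", is the expected signature of this test: the mlx5 PCI function is surprise-removed while bdevperf I/O is in flight, outstanding commands are aborted with SQ DELETION, the host keeps retrying controller resets while the port is absent, and the final reset succeeds once the device reappears. A minimal sketch of the sysfs remove/rescan step that drives this pattern (the BDF below is a placeholder, not taken from this log; target/device_removal.sh remains the authoritative script):

    bdf="0000:5e:00.0"                                   # placeholder PCI address of the mlx5 port under test
    # Surprise-remove the function; outstanding I/O is aborted (SQ DELETION) and
    # bdev_nvme keeps retrying controller resets for as long as the port is gone.
    echo 1 | sudo tee "/sys/bus/pci/devices/${bdf}/remove" > /dev/null
    sleep 5                                              # keep the device absent long enough to observe the retries
    # Rescan the bus so the function comes back; the next reset attempt succeeds.
    echo 1 | sudo tee /sys/bus/pci/rescan > /dev/null
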
00:20:19.123 
00:20:19.123 Latency(us)
00:20:19.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:19.123 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:19.123 Verification LBA range: start 0x0 length 0x8000
00:20:19.123 Nvme_mlx_0_0n1 : 90.01 11047.71 43.16 0.00 0.00 11564.17 2090.91 11056984.26
00:20:19.123 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:19.123 Verification LBA range: start 0x0 length 0x8000
00:20:19.123 Nvme_mlx_0_1n1 : 90.01 9570.58 37.39 0.00 0.00 13355.32 2418.59 12079595.52
00:20:19.123 ===================================================================================================================
00:20:19.123 Total : 20618.29 80.54 0.00 0.00 12395.58 2090.91 12079595.52
00:20:19.123 Received shutdown signal, test time was about 90.000000 seconds
00:20:19.123 
00:20:19.123 Latency(us)
00:20:19.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:19.123 ===================================================================================================================
00:20:19.123 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 952060 ']'
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 952060'
00:20:19.123 killing process with pid 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 952060
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid=
00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0
00:20:19.123 
00:20:19.123 real 1m33.095s
00:20:19.123 user 4m25.330s 00:20:19.123 sys 0m4.171s 00:20:19.123 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 ************************************ 00:20:19.124 END TEST nvmf_device_removal_pci_remove_no_srq 00:20:19.124 ************************************ 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 ************************************ 00:20:19.124 START TEST nvmf_device_removal_pci_remove 00:20:19.124 ************************************ 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1124 -- # test_remove_and_rescan 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=967463 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 967463 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 967463 ']' 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:19.124 23:11:08 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 [2024-06-07 23:11:08.946576] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
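The nvmfappstart/waitforlisten step traced above amounts to launching build/bin/nvmf_tgt with the test's core mask and polling its RPC socket until it answers. An illustrative equivalent, assuming the current directory is an SPDK checkout (the harness's waitforlisten in autotest_common.sh is the real implementation):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # rpc_get_methods succeeds once the target is listening on /var/tmp/spdk.sock.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" 2> /dev/null || break         # stop waiting if the target died during startup
        sleep 0.5
    done
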
00:20:19.124 [2024-06-07 23:11:08.946616] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.124 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.124 [2024-06-07 23:11:09.007080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:19.124 [2024-06-07 23:11:09.086860] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.124 [2024-06-07 23:11:09.086896] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.124 [2024-06-07 23:11:09.086904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.124 [2024-06-07 23:11:09.086910] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.124 [2024-06-07 23:11:09.086915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.124 [2024-06-07 23:11:09.086956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.124 [2024-06-07 23:11:09.086958] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 [2024-06-07 23:11:09.814421] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15a7360/0x15ab850) succeed. 00:20:19.124 [2024-06-07 23:11:09.823278] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15a8860/0x15ecee0) succeed. 
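With the target up and both mlx5 IB devices registered, the rpc_cmd call just above and the shell trace that follows configure one subsystem per RDMA netdev. Condensed into direct scripts/rpc.py invocations for readability (values copied from the trace; the trace itself is the authoritative sequence, and the second port, mlx_0_1, is set up the same way):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0      # 128 MiB malloc bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
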
00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:20:19.124 23:11:09 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.124 23:11:09 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:20:19.125 23:11:10 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 [2024-06-07 23:11:10.010665] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 [2024-06-07 23:11:10.085386] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=967730 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat 
$testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 967730 /var/tmp/bdevperf.sock 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 967730 ']' 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 
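At this point the target is fully provisioned (one malloc-backed subsystem with an RDMA listener per mlx interface) and bdevperf has just been launched on its own RPC socket. The per-interface provisioning walked through above condenses to the sketch below, shown for mlx_0_0 / 192.168.100.8 only; the mlx_0_1 / 192.168.100.9 steps are identical, and pointing rpc.py at the default target socket is an assumption:

  dev_name=mlx_0_0
  ip=$(ip -o -4 addr show "$dev_name" | awk '{print $4}' | cut -d/ -f1)          # 192.168.100.8 in this run
  nqn=nqn.2016-06.io.spdk:system_${dev_name}

  ./spdk/scripts/rpc.py bdev_malloc_create 128 512 -b "$dev_name"                # 128 MiB bdev, 512 B blocks
  ./spdk/scripts/rpc.py nvmf_create_subsystem "$nqn" -a -s "SPDK000${dev_name}"  # allow any host, fixed serial
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns "$nqn" "$dev_name"
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420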
00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:20:19.125 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:20:19.126 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.126 23:11:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.126 Nvme_mlx_0_0n1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:19.126 Nvme_mlx_0_1n1 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=967966 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:20:19.126 23:11:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 
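On the initiator side the harness generates traffic with bdevperf: it runs with its own RPC socket, one NVMe-oF controller is attached per subsystem over RDMA, and the preconfigured verify job is then started through bdevperf.py while the removal loop below executes. Reassembled from the commands in the log, with paths shortened to ./spdk and the option strings kept exactly as used:

  ./spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  bdevperf_pid=$!

  rpc="./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_set_options -r -1           # retry option copied from the run above
  $rpc bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
  $rpc bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1

  # Kick off the 90 s verify workload in the background; the device removals happen while it runs.
  ./spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &
  bdevperf_rpc_pid=$!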
00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.396 mlx5_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:20:24.396 23:11:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:20:24.396 [2024-06-07 23:11:16.300923] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:20:24.396 [2024-06-07 23:11:16.301507] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:24.396 [2024-06-07 23:11:16.307109] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:24.396 [2024-06-07 23:11:16.307134] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:20:30.962 23:11:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.962 
23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:20:30.962 23:11:22 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:20:30.962 [2024-06-07 23:11:22.962924] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x15a8060, err 11. Skip rescan. 00:20:30.962 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:20:30.962 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:20:30.962 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:20:30.962 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:20:30.962 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:20:30.963 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:20:30.963 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:20:30.963 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:20:30.963 23:11:23 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:20:31.220 [2024-06-07 23:11:23.349555] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15aa280/0x15ab850) succeed. 00:20:31.220 [2024-06-07 23:11:23.349616] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
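That completes the first removal/rescan cycle for mlx_0_0: the function at 0000:da:00.0 was surprise-removed while I/O was running, the target tore down mlx5_0 and its qpairs, and after a PCI rescan the netdev reappeared and mlx5_0 was recreated; the listener itself only comes back once the IP address is restored a few lines below, which is why the retry still reports "failed(-1) to listen" here. The xtrace does not print redirect targets, so the sysfs nodes in the following sketch are assumptions based on the standard kernel PCI remove/rescan interface; the PCI address, netdev name and IP are taken from the log:

  dev_name=mlx_0_0
  pci_dir=$(readlink -f "/sys/bus/pci/devices/0000:da:00.0/net/${dev_name}/device")

  echo 1 > "${pci_dir}/remove"      # surprise-remove the function under the running target (assumed node)
  # ...poll the target until mlx5_0 drops out of nvmf_get_stats, then bring the port back...
  echo 1 > /sys/bus/pci/rescan      # rediscover the function (assumed node)
  ip link set "$dev_name" up
  ip addr add 192.168.100.8/24 dev "$dev_name"   # the address is lost with the device; re-add it as done below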
00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:34.507 [2024-06-07 23:11:26.363842] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:34.507 [2024-06-07 23:11:26.363871] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:20:34.507 [2024-06-07 23:11:26.363884] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:34.507 [2024-06-07 23:11:26.363894] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:34.507 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:20:34.508 mlx5_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:20:34.508 23:11:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:20:34.508 [2024-06-07 23:11:26.528325] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:20:34.508 [2024-06-07 23:11:26.528389] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:34.508 [2024-06-07 23:11:26.537777] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:34.508 [2024-06-07 23:11:26.537792] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local 
rdma_dev_name= 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:20:41.116 23:11:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:20:41.116 [2024-06-07 23:11:33.150953] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x15947d0, err 11. Skip rescan. 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:20:41.116 23:11:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:20:41.375 [2024-06-07 23:11:33.516780] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15aa670/0x15ecee0) succeed. 00:20:41.375 [2024-06-07 23:11:33.516856] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
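The same remove/rescan cycle has now run for mlx_0_1 at 0000:da:00.1. After each rescan the harness re-adds the interface address and then polls the target until the RDMA device count recovers and the listener is re-established, using nvmf_get_stats with the jq filters seen above. A condensed sketch; the jq expressions are verbatim from the log, the sleep pacing is an assumption:

  ib_count_after_remove=1                  # one of the two mlx5 devices is gone at this point
  for i in $(seq 1 10); do
      ib_count=$(./spdk/scripts/rpc.py nvmf_get_stats \
                 | jq -r '.poll_groups[0].transports[].devices | length')
      (( ib_count > ib_count_after_remove )) && break   # device re-registered, port can come back
      sleep 5                                           # pacing assumed; not visible in the xtrace
  done

  # A specific device can be checked for in the same way:
  ./spdk/scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_1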
00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:44.659 [2024-06-07 23:11:36.609711] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:44.659 [2024-06-07 23:11:36.609746] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:20:44.659 [2024-06-07 23:11:36.609761] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:44.659 [2024-06-07 23:11:36.609775] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:20:44.659 23:11:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 967966 00:21:52.377 0 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # 
killprocess 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 967730 ']' 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 967730' 00:21:52.378 killing process with pid 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 967730 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:21:52.378 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:21:52.378 [2024-06-07 23:11:10.137264] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:21:52.378 [2024-06-07 23:11:10.137307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid967730 ] 00:21:52.378 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.378 [2024-06-07 23:11:10.190791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.378 [2024-06-07 23:11:10.263534] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.378 Running I/O for 90 seconds... 
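Everything from the "cat .../try.txt" above through the end of this section is the bdevperf-side log captured during the run. The teardown that produced it reduces to the sketch below, reusing the pids recorded in the earlier initiator sketch (967966 and 967730 in this run); try.txt sits under test/nvmf/target in the workspace:

  testdir=./spdk/test/nvmf/target          # the harness uses the workspace copy of this path
  wait "$bdevperf_rpc_pid"                 # let the backgrounded perform_tests call finish
  kill "$bdevperf_pid" && wait "$bdevperf_pid" 2>/dev/null
  cat "$testdir/try.txt"                   # bdevperf output, reproduced in the lines that follow
  rm -f "$testdir/try.txt"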
00:21:52.378 [2024-06-07 23:11:16.307191] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:52.378 [2024-06-07 23:11:16.307219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.378 [2024-06-07 23:11:16.307229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.378 [2024-06-07 23:11:16.307236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.378 [2024-06-07 23:11:16.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.378 [2024-06-07 23:11:16.307250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.378 [2024-06-07 23:11:16.307256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.378 [2024-06-07 23:11:16.307264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.378 [2024-06-07 23:11:16.307270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.378 [2024-06-07 23:11:16.311078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.378 [2024-06-07 23:11:16.311093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.378 [2024-06-07 23:11:16.311119] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:52.378 [2024-06-07 23:11:16.317194] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.327218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.337307] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.347352] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.357376] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.367417] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.377583] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.387860] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.398401] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.408798] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.378 [2024-06-07 23:11:16.418924] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.429296] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.440083] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.450203] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.460486] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.471001] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.481280] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.491573] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.501838] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.512171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.522661] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.533317] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.543344] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.553370] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.563519] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.573918] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.584246] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.594517] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.604944] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.615224] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.625531] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.636096] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.646426] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.378 [2024-06-07 23:11:16.656455] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.666479] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.676590] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.686792] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.697170] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.707531] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.718084] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.728233] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.738508] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.748773] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.759196] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.769500] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.779779] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.790139] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.800477] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.810885] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.821171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.831400] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.841657] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.852035] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.862331] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.872688] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.883034] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.378 [2024-06-07 23:11:16.893418] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.378 [2024-06-07 23:11:16.903740] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.914081] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.924378] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.934797] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.945080] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.955420] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.965767] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.976044] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.986301] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:16.996588] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.006983] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.017306] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.027459] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.037791] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.048114] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.058487] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.068776] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.079040] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.089355] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.099600] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.109888] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.120142] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.379 [2024-06-07 23:11:17.130393] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.140608] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.150951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.161272] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.171584] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.181755] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.192093] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.202364] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.212534] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.222757] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.233066] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.243485] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.253723] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.264311] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.274528] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.284824] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.295018] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.379 [2024-06-07 23:11:17.305356] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
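The long run of bdev_nvme_failover_ctrlr_unsafe notices above differs only in its SPDK timestamps, so the useful signal is how many notices were emitted and over what interval (in the stretch above, close to a hundred of them in under a second, from 23:11:16.317 to 23:11:17.305). Below is a minimal, hypothetical post-processing sketch in Python that extracts that summary from a console log like this one; the regex simply mirrors the message format printed above, and the script itself is illustrative only, not part of SPDK or of this CI job:

#!/usr/bin/env python3
# count_failover_notices.py -- hypothetical helper, not part of SPDK or this autotest run.
# Counts the repeated "Unable to perform failover, already in progress" notices in a
# console log whose messages look like the ones above and reports the time span covered.
import re
import sys

# Pattern copied from the message format in the log above; several messages can share
# one physical line in the captured output, so finditer() is used rather than match().
NOTICE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} [0-9:.]+)\] "
    r"bdev_nvme\.c:\d+:bdev_nvme_failover_ctrlr_unsafe: "
    r"\*NOTICE\*: Unable to perform failover, already in progress\."
)

def summarize(stream):
    """Return (count, first_timestamp, last_timestamp) for the notices seen in `stream`."""
    count, first, last = 0, None, None
    for line in stream:
        for m in NOTICE.finditer(line):
            count += 1
            if first is None:
                first = m.group("ts")
            last = m.group("ts")
    return count, first, last

if __name__ == "__main__":
    n, first, last = summarize(sys.stdin)
    print(f"{n} 'already in progress' notices between {first} and {last}")

Piping the captured console output through a script like this collapses the repeated notices into a single summary line, which is usually all that is needed when comparing runs.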
00:21:52.379 [2024-06-07 23:11:17.313587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:199368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:199376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:199384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:199392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:199400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:199408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:199416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:199424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:199432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:199440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:199448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:199456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:199464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:199472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:199480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:199488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:199496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:199504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:199512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:199520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:199528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.379 [2024-06-07 23:11:17.313902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.379 [2024-06-07 23:11:17.313910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:199536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:199544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:199552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:199560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:199568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:199576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.313988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.313996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:199584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:199592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:199600 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:199608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:199616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:199624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:199632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:199640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:199648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:199656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:199664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:199672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.380 [2024-06-07 23:11:17.314162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:198656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:198664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:198672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:198680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:198688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:198696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:198704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:198712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:198720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:198728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:198736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:198744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:198752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:198760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:198768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:198776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:198784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:198792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:198800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.380 [2024-06-07 23:11:17.314439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:198808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef 00:21:52.380 [2024-06-07 23:11:17.314445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:198816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:198824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:198832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:198840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:198848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:198856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:198864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:198872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:198880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:198888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:198896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:198904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:198912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:198920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:198928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:198936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:198944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:198952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:198960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:198968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:198976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:198984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:198992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:199000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:199008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:199016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:199024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:199032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:199040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:199048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:199056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:199064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:199072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:199080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:199088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:199096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1810ef 00:21:52.381 [2024-06-07 23:11:17.314969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.381 [2024-06-07 23:11:17.314977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:199104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.314983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.314992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:199112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.314998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:199120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:199128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:199136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:199144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:199152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:199160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:199168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:199176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:199184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:199192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:199200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:199208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:199216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:199224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:199232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:199240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:199248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:199256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:199264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:199272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:199280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:199288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:199296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:199304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:199312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:199320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:21:52.382 [2024-06-07 23:11:17.315380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.382 [2024-06-07 23:11:17.315389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:199328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:21:52.383 [2024-06-07 23:11:17.315395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:17.324185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:199336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1810ef 00:21:52.383 [2024-06-07 23:11:17.324195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:17.324204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:199344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1810ef 00:21:52.383 [2024-06-07 23:11:17.324211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:17.324219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:199352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1810ef 00:21:52.383 [2024-06-07 23:11:17.324225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:17.336897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:52.383 [2024-06-07 23:11:17.336910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.383 [2024-06-07 23:11:17.336917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:199360 len:8 PRP1 0x0 PRP2 0x0 00:21:52.383 [2024-06-07 23:11:17.336926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:17.340024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:17.340323] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED 
but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.383 [2024-06-07 23:11:17.340336] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:17.340342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:17.340357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:17.340365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:17.340385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:17.340392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:17.340400] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:17.340417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.383 [2024-06-07 23:11:17.340424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:18.342938] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.383 [2024-06-07 23:11:18.342971] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:18.342977] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:18.342995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:18.343003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:18.343017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:18.343024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:18.343032] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:18.343051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:52.383 [2024-06-07 23:11:18.343058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:19.346430] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.383 [2024-06-07 23:11:19.346463] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:19.346470] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:19.346488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:19.346496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:19.346506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:19.346517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:19.346524] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:19.347388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.383 [2024-06-07 23:11:19.347401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:21.353072] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:21.353107] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:21.353129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:21.353138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:21.353158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:21.353164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:21.353172] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:21.353200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:52.383 [2024-06-07 23:11:21.353208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:23.358348] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:23.358371] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:23.358393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:23.358401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:23.358412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:23.358419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:23.358426] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:23.358444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.383 [2024-06-07 23:11:23.358452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:25.363399] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.383 [2024-06-07 23:11:25.363424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:52.383 [2024-06-07 23:11:25.363443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:25.363451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:21:52.383 [2024-06-07 23:11:25.363462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:21:52.383 [2024-06-07 23:11:25.363468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:21:52.383 [2024-06-07 23:11:25.363475] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:21:52.383 [2024-06-07 23:11:25.363492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.383 [2024-06-07 23:11:25.363503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:21:52.383 [2024-06-07 23:11:26.449913] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
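The entries above show the bdev_nvme reset path retrying roughly every one to two seconds while RDMA address resolution keeps failing (RDMA_CM_EVENT_ADDR_ERROR, status -19), until the reset finally completes at 23:11:26. A minimal sketch for measuring how long such a reconnect loop ran from a saved console log follows; it assumes one SPDK message per console line, matches only the literal "resetting controller" and "Resetting controller successful." notices that appear above, and the function name and log path in the usage comment are hypothetical.

import re
from datetime import datetime

# Match the SPDK timestamp printed in square brackets, then the rest of the message.
TS = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] (.*)")

def reset_duration(lines):
    first_attempt = success = None
    for line in lines:
        m = TS.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        msg = m.group(2)
        if first_attempt is None and "resetting controller" in msg:
            first_attempt = ts           # first disconnect/reset attempt
        if "Resetting controller successful." in msg:
            success = ts                 # reset loop finished
            break
    return (success - first_attempt).total_seconds() if first_attempt and success else None

# Usage (hypothetical path): print(reset_duration(open("nvmf-phy-autotest-console.log")))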
00:21:52.383 [2024-06-07 23:11:26.533154] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:52.383 [2024-06-07 23:11:26.533178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.383 [2024-06-07 23:11:26.533188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:26.533195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.383 [2024-06-07 23:11:26.533202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:26.533209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.383 [2024-06-07 23:11:26.533215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:26.533222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.383 [2024-06-07 23:11:26.533228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:16 sqhd:33b9 p:0 m:0 dnr:0 00:21:52.383 [2024-06-07 23:11:26.535231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.383 [2024-06-07 23:11:26.535268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.383 [2024-06-07 23:11:26.535312] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:21:52.383 [2024-06-07 23:11:26.543167] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.383 [2024-06-07 23:11:26.553190] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.383 [2024-06-07 23:11:26.563216] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.383 [2024-06-07 23:11:26.573242] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.383 [2024-06-07 23:11:26.583269] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.593297] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.603322] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.613347] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.623373] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.633399] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.384 [2024-06-07 23:11:26.643425] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.653451] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.663475] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.673502] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.683529] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.693554] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.703580] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.713607] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.723632] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.733658] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.743684] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.753711] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.763739] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.773767] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.783793] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.793820] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.803846] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.813872] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.823898] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.833924] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.843951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.853978] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.864003] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.384 [2024-06-07 23:11:26.874027] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.884055] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.894081] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.904108] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.914134] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.924161] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.934187] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.944213] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.954241] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.964269] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.974293] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.984318] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:26.994345] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.004373] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.014400] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.024427] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.034454] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.044481] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.054506] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.064534] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.074560] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.084587] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.094615] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.384 [2024-06-07 23:11:27.104642] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.114667] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.124692] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.134717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.144743] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.154769] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.164795] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.174822] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.184848] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.194873] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.204900] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.214928] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.224953] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.234979] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.245005] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.255140] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.265165] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.275192] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.285218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.295244] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.305272] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.315299] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.325324] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:52.384 [2024-06-07 23:11:27.335349] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.345374] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.355400] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.365425] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.375518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.385655] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.395868] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.406652] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.416677] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.426818] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.436845] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.447096] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.384 [2024-06-07 23:11:27.457924] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.467971] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.478120] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.488737] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.498764] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.509273] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.519299] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.385 [2024-06-07 23:11:27.530099] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
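In the spdk_nvme_print_completion output above and below, the pair in parentheses after the status string is the NVMe status code type and status code in hex, so ABORTED - SQ DELETION (00/08) is generic status 0x08, Command Aborted due to SQ Deletion: every command still queued on the deleted submission queues is completed with that status while the controllers are reset and failed over. A small illustrative decoder, covering only the generic-status values that occur in this log, is sketched below; the helper name and fallback formatting are assumptions, not SPDK API.

# Illustrative decoder for the "(sct/sc)" pair printed by spdk_nvme_print_completion.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:                      # status code type 0: generic command status
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x00, 0x08))        # -> ABORTED - SQ DELETION, as in the entries here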
00:21:52.385 [2024-06-07 23:11:27.537954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.537973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.537986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.537993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:52.385 [2024-06-07 23:11:27.538401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.385 [2024-06-07 23:11:27.538437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.385 [2024-06-07 23:11:27.538443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 
dnr:0 00:21:52.386 [2024-06-07 23:11:27.538814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.386 [2024-06-07 23:11:27.538987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.386 [2024-06-07 23:11:27.538995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.387 [2024-06-07 23:11:27.539001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.387 [2024-06-07 23:11:27.539012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.387 [2024-06-07 23:11:27.539019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.388 [2024-06-07 23:11:27.539417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.388 [2024-06-07 23:11:27.539424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.389 [2024-06-07 23:11:27.539595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539658] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.539782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf0ef 00:21:52.389 [2024-06-07 23:11:27.539789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32556 cdw0:819f0c60 sqhd:8540 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.552549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:52.389 [2024-06-07 23:11:27.552564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.389 [2024-06-07 23:11:27.552570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16488 len:8 PRP1 0x0 PRP2 0x0 00:21:52.389 [2024-06-07 23:11:27.552577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.389 [2024-06-07 23:11:27.552622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.389 [2024-06-07 23:11:27.554455] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.389 [2024-06-07 23:11:27.554474] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.389 [2024-06-07 23:11:27.554480] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.389 [2024-06-07 23:11:27.554494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.389 [2024-06-07 23:11:27.554502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.389 [2024-06-07 23:11:27.554522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.389 [2024-06-07 23:11:27.554528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.389 [2024-06-07 23:11:27.554536] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.389 [2024-06-07 23:11:27.554554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.389 [2024-06-07 23:11:27.554560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.389 [2024-06-07 23:11:28.557883] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.389 [2024-06-07 23:11:28.557921] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.389 [2024-06-07 23:11:28.557928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.389 [2024-06-07 23:11:28.557945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.389 [2024-06-07 23:11:28.557958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
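The block above shows every queued WRITE and READ on qpair 1 being completed with ABORTED - SQ DELETION while the submission queue is torn down, and then the first reconnect attempt failing with RDMA_CM_EVENT_ADDR_ERROR because the underlying mlx5 device has just gone away. A minimal sketch of how that reconnect state could be inspected out-of-band over the SPDK RPC socket (the script and socket paths are assumptions based on the workspace layout in this log, not commands the test actually issued):

    # Hypothetical inspection of reconnect state; not part of the test script.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # bdev_nvme_get_controllers lists each attached controller and whether it
    # is currently connected or still mid-reset/reconnect.
    $RPC -s /var/tmp/spdk.sock bdev_nvme_get_controllers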
00:21:52.389 [2024-06-07 23:11:28.557968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.389 [2024-06-07 23:11:28.557975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.389 [2024-06-07 23:11:28.557982] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.389 [2024-06-07 23:11:28.558001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.389 [2024-06-07 23:11:28.558014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.389 [2024-06-07 23:11:29.563366] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:21:52.389 [2024-06-07 23:11:29.563401] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.389 [2024-06-07 23:11:29.563408] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.389 [2024-06-07 23:11:29.563426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.389 [2024-06-07 23:11:29.563434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.390 [2024-06-07 23:11:29.563451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.390 [2024-06-07 23:11:29.563457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.390 [2024-06-07 23:11:29.563465] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.390 [2024-06-07 23:11:29.563487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.390 [2024-06-07 23:11:29.563495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.390 [2024-06-07 23:11:31.568881] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.390 [2024-06-07 23:11:31.568918] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.390 [2024-06-07 23:11:31.568942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.390 [2024-06-07 23:11:31.568950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.390 [2024-06-07 23:11:31.568972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.390 [2024-06-07 23:11:31.568979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.390 [2024-06-07 23:11:31.568986] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.390 [2024-06-07 23:11:31.569023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
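Each retry cycle above follows the same pattern: resolve the address, receive RDMA_CM_EVENT_ADDR_ERROR, fail the controller, log "Resetting controller failed.", and schedule another reset a second or two later. That is the expected behaviour of the pci_remove variant of this test, where one mlx5 physical function is hot-removed through sysfs and only reappears after a PCI rescan; the "Resetting controller successful" message further below marks the point where the device has come back. A minimal sketch of that remove/rescan mechanism, using the 0000:da:00.1 BDF reported later in this log purely as an illustration (not the exact commands the test script ran):

    # Hot-remove one mlx5 physical function, then bring it back via rescan.
    BDF=0000:da:00.1                                    # illustrative BDF taken from this log
    echo 1 | sudo tee /sys/bus/pci/devices/$BDF/remove  # reconnects start failing with ADDR_ERROR
    sleep 5
    echo 1 | sudo tee /sys/bus/pci/rescan               # device returns; the pending reset succeeds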
00:21:52.390 [2024-06-07 23:11:31.569031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.390 [2024-06-07 23:11:33.574734] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.390 [2024-06-07 23:11:33.574768] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.390 [2024-06-07 23:11:33.574793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.390 [2024-06-07 23:11:33.574801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.390 [2024-06-07 23:11:33.574812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.390 [2024-06-07 23:11:33.574823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.390 [2024-06-07 23:11:33.574830] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.390 [2024-06-07 23:11:33.574851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.390 [2024-06-07 23:11:33.574858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.390 [2024-06-07 23:11:35.579823] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.390 [2024-06-07 23:11:35.579861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.390 [2024-06-07 23:11:35.579883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.390 [2024-06-07 23:11:35.579893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:21:52.390 [2024-06-07 23:11:35.579904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.390 [2024-06-07 23:11:35.579911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.390 [2024-06-07 23:11:35.579918] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.390 [2024-06-07 23:11:35.579938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.390 [2024-06-07 23:11:35.579946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.390 [2024-06-07 23:11:37.585234] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:21:52.390 [2024-06-07 23:11:37.585283] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:21:52.390 [2024-06-07 23:11:37.585326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:52.390 [2024-06-07 23:11:37.585335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:21:52.390 [2024-06-07 23:11:37.586188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:21:52.390 [2024-06-07 23:11:37.586201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:21:52.390 [2024-06-07 23:11:37.586209] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:21:52.390 [2024-06-07 23:11:37.586250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.390 [2024-06-07 23:11:37.586259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:21:52.390 [2024-06-07 23:11:38.637960] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:52.390 00:21:52.390 Latency(us) 00:21:52.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.390 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.390 Verification LBA range: start 0x0 length 0x8000 00:21:52.390 Nvme_mlx_0_0n1 : 90.01 11193.67 43.73 0.00 0.00 11411.96 1022.05 11056984.26 00:21:52.390 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.390 Verification LBA range: start 0x0 length 0x8000 00:21:52.390 Nvme_mlx_0_1n1 : 90.01 9495.02 37.09 0.00 0.00 13455.49 2449.80 13038293.58 00:21:52.390 =================================================================================================================== 00:21:52.390 Total : 20688.69 80.82 0.00 0.00 12349.85 1022.05 13038293.58 00:21:52.390 Received shutdown signal, test time was about 90.000000 seconds 00:21:52.390 00:21:52.390 Latency(us) 00:21:52.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.390 =================================================================================================================== 00:21:52.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 967463 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 967463 ']' 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 967463 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 967463 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' 
reactor_0 = sudo ']' 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 967463' 00:21:52.390 killing process with pid 967463 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 967463 00:21:52.390 23:12:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 967463 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid= 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0 00:21:52.390 00:21:52.390 real 1m33.211s 00:21:52.390 user 4m25.606s 00:21:52.390 sys 0m4.136s 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:21:52.390 ************************************ 00:21:52.390 END TEST nvmf_device_removal_pci_remove 00:21:52.390 ************************************ 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:52.390 rmmod nvme_rdma 00:21:52.390 rmmod nvme_fabrics 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.390 23:12:42 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:52.391 23:12:42 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:21:52.391 23:12:42 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:21:52.391 23:12:42 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:21:52.391 00:21:52.391 real 3m12.729s 00:21:52.391 user 8m52.961s 00:21:52.391 sys 0m12.866s 00:21:52.391 23:12:42 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:52.391 23:12:42 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:21:52.391 ************************************ 00:21:52.391 END TEST nvmf_device_removal 00:21:52.391 ************************************ 00:21:52.391 23:12:42 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:52.391 23:12:42 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:52.391 23:12:42 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:52.391 23:12:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:52.391 ************************************ 00:21:52.391 START TEST nvmf_srq_overwhelm 00:21:52.391 ************************************ 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:52.391 * Looking for test storage... 00:21:52.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.391 23:12:42 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.391 23:12:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:55.678 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:55.678 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:55.678 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:55.679 Found net devices under 0000:da:00.0: mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:55.679 Found net devices under 0000:da:00.1: mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:55.679 23:12:47 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:55.679 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:55.679 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:55.679 altname enp218s0f0np0 00:21:55.679 altname ens818f0np0 00:21:55.679 inet 192.168.100.8/24 scope global mlx_0_0 00:21:55.679 valid_lft forever preferred_lft forever 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:55.679 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:55.679 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:55.679 altname enp218s0f1np1 00:21:55.679 altname ens818f1np1 00:21:55.679 inet 192.168.100.9/24 scope global mlx_0_1 00:21:55.679 valid_lft forever preferred_lft forever 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for 
nic_name in $(get_rdma_if_list) 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:55.679 192.168.100.9' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:55.679 192.168.100.9' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:55.679 192.168.100.9' 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:21:55.679 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=986709 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 986709 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # '[' -z 986709 ']' 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
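At this point nvmfappstart has launched the target with a 4-core mask and waitforlisten is polling until the RPC socket answers; the trace below confirms it once the app prints its startup banner and registers the two IB devices. A simplified stand-in for that start-and-wait sequence (a sketch of the idea, not the real waitforlisten implementation; the binary path and flags are the ones printed in this trace, the polling loop is an assumption):

    # Launch the NVMe-oF target: shm id 0, all trace groups enabled, reactors on cores 0-3.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the UNIX-domain RPC socket responds (bounded number of retries).
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done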
00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.680 23:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:55.680 [2024-06-07 23:12:47.945241] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:21:55.680 [2024-06-07 23:12:47.945293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.940 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.940 [2024-06-07 23:12:48.007074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.940 [2024-06-07 23:12:48.080693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.940 [2024-06-07 23:12:48.080734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.940 [2024-06-07 23:12:48.080741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.940 [2024-06-07 23:12:48.080749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.940 [2024-06-07 23:12:48.080754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.940 [2024-06-07 23:12:48.080819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.940 [2024-06-07 23:12:48.080915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.940 [2024-06-07 23:12:48.081007] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.940 [2024-06-07 23:12:48.081013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@863 -- # return 0 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.509 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 [2024-06-07 23:12:48.810533] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18df9d0/0x18e3ec0) succeed. 00:21:56.768 [2024-06-07 23:12:48.819644] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e1010/0x1925550) succeed. 
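At this point the target application has started and the RDMA transport exists (the two "Create IB device" notices above). A condensed sketch of those two steps, reusing the binary path and flags printed in this log and assuming rpc_cmd is the test wrapper that forwards its arguments to scripts/rpc.py, would be:

    # sketch of the target start + transport creation traced above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the harness waits for the RPC socket /var/tmp/spdk.sock (waitforlisten) before issuing RPCs
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024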
00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 Malloc0 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 [2024-06-07 23:12:48.914328] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.768 23:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme0n1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme0n1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.704 Malloc1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.704 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.962 23:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme1n1 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme1n1 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:58.898 23:12:50 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:58.898 Malloc2 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.898 23:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme2n1 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme2n1 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:59.834 Malloc3 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.834 23:12:52 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.834 23:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme3n1 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme3n1 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.840 Malloc4 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.840 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
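The repetition in the trace is the srq_overwhelm setup loop: for each index it creates subsystem nqn.2016-06.io.spdk:cnodeN, backs it with a 64 MiB Malloc bdev (512-byte blocks), exposes it on the RDMA listener 192.168.100.8:4420, connects the kernel initiator with the host UUID shown in the log, and waits for /dev/nvmeNn1 to appear. A compressed sketch of one iteration follows; rpc_cmd and waitforblk are the harness helpers seen above, and SERIAL is a placeholder for the SPDK-prefixed serial string printed in the trace.

    # sketch of one iteration of the setup loop traced above (i runs 0..5 in the test)
    i=2
    HOSTID=803833e2-2ada-e911-906e-0017a4403562
    SERIAL="SPDK-cnode$i"   # placeholder; the trace uses SPDK-prefixed zero-padded serial numbers
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$SERIAL"
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                    # 64 MiB backing bdev, 512 B blocks
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforblk "nvme${i}n1"                                            # polls lsblk until the namespace shows up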
00:22:01.099 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.099 23:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme4n1 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme4n1 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:02.034 Malloc5 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.034 23:12:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1234 -- # local i=0 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme5n1 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme5n1 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:22:02.969 23:12:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:22:02.969 [global] 00:22:02.969 thread=1 00:22:02.969 invalidate=1 00:22:02.969 rw=read 00:22:02.969 time_based=1 00:22:02.969 runtime=10 00:22:02.969 ioengine=libaio 00:22:02.969 direct=1 00:22:02.969 bs=1048576 00:22:02.969 iodepth=128 00:22:02.969 norandommap=1 00:22:02.969 numjobs=13 00:22:02.969 00:22:02.969 [job0] 00:22:02.969 filename=/dev/nvme0n1 00:22:02.969 [job1] 00:22:02.969 filename=/dev/nvme1n1 00:22:02.969 [job2] 00:22:02.969 filename=/dev/nvme2n1 00:22:02.969 [job3] 00:22:02.969 filename=/dev/nvme3n1 00:22:02.969 [job4] 00:22:02.969 filename=/dev/nvme4n1 00:22:02.969 [job5] 00:22:02.969 filename=/dev/nvme5n1 00:22:03.237 Could not set queue depth (nvme0n1) 00:22:03.237 Could not set queue depth (nvme1n1) 00:22:03.237 Could not set queue depth (nvme2n1) 00:22:03.237 Could not set queue depth (nvme3n1) 00:22:03.237 Could not set queue depth (nvme4n1) 00:22:03.237 Could not set queue depth (nvme5n1) 00:22:03.499 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 00:22:03.499 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 00:22:03.499 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 00:22:03.499 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 00:22:03.499 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 00:22:03.499 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:03.499 ... 
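The fio-wrapper invocation above (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13) expands to the [global] section printed in the log: 1 MiB sequential reads at queue depth 128 for 10 seconds, 13 jobs per device, against the six namespaces /dev/nvme0n1 through /dev/nvme5n1 (6 devices x 13 numjobs = the 78 threads reported below). A stand-alone reconstruction of that job, written as a shell snippet purely from the values dumped above rather than from the wrapper itself, would be:

    # reconstruct the generated fio job from the [global]/[jobN] dump in the trace
    cat > /tmp/srq_overwhelm.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=1048576
    iodepth=128
    norandommap=1
    numjobs=13

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme1n1
    [job2]
    filename=/dev/nvme2n1
    [job3]
    filename=/dev/nvme3n1
    [job4]
    filename=/dev/nvme4n1
    [job5]
    filename=/dev/nvme5n1
    EOF
    fio /tmp/srq_overwhelm.fio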
00:22:03.499 fio-3.35 00:22:03.499 Starting 78 threads 00:22:18.362 00:22:18.362 job0: (groupid=0, jobs=1): err= 0: pid=988227: Fri Jun 7 23:13:08 2024 00:22:18.362 read: IOPS=165, BW=166MiB/s (174MB/s)(1662MiB/10035msec) 00:22:18.362 slat (usec): min=31, max=2088.2k, avg=6014.31, stdev=78076.79 00:22:18.362 clat (msec): min=34, max=5372, avg=639.60, stdev=1287.53 00:22:18.362 lat (msec): min=35, max=5380, avg=645.61, stdev=1296.25 00:22:18.362 clat percentiles (msec): 00:22:18.362 | 1.00th=[ 52], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 122], 00:22:18.362 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 161], 60.00th=[ 251], 00:22:18.362 | 70.00th=[ 253], 80.00th=[ 284], 90.00th=[ 1435], 95.00th=[ 4732], 00:22:18.362 | 99.00th=[ 5201], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5403], 00:22:18.362 | 99.99th=[ 5403] 00:22:18.362 bw ( KiB/s): min= 6156, max=1030144, per=10.46%, avg=349508.33, stdev=371257.63, samples=9 00:22:18.362 iops : min= 6, max= 1006, avg=341.22, stdev=362.48, samples=9 00:22:18.362 lat (msec) : 50=0.90%, 100=3.07%, 250=55.11%, 500=26.23%, 2000=5.42% 00:22:18.362 lat (msec) : >=2000=9.27% 00:22:18.362 cpu : usr=0.04%, sys=1.82%, ctx=1832, majf=0, minf=32769 00:22:18.362 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:22:18.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.362 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.362 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.362 job0: (groupid=0, jobs=1): err= 0: pid=988228: Fri Jun 7 23:13:08 2024 00:22:18.362 read: IOPS=2, BW=2531KiB/s (2592kB/s)(30.0MiB/12138msec) 00:22:18.362 slat (usec): min=791, max=2195.6k, avg=334429.46, stdev=742362.15 00:22:18.362 clat (msec): min=2104, max=12135, avg=9886.24, stdev=3329.48 00:22:18.363 lat (msec): min=4088, max=12137, avg=10220.67, stdev=3009.37 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 2106], 5.00th=[ 4077], 10.00th=[ 4111], 20.00th=[ 6342], 00:22:18.363 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:22:18.363 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:18.363 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.363 | 99.99th=[12147] 00:22:18.363 lat (msec) : >=2000=100.00% 00:22:18.363 cpu : usr=0.00%, sys=0.20%, ctx=93, majf=0, minf=7681 00:22:18.363 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.363 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988229: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=4, BW=4303KiB/s (4407kB/s)(51.0MiB/12136msec) 00:22:18.363 slat (usec): min=711, max=2077.7k, avg=196475.39, stdev=580509.30 00:22:18.363 clat (msec): min=2115, max=12128, avg=8251.60, stdev=3109.85 00:22:18.363 lat (msec): min=4185, max=12135, avg=8448.08, stdev=3029.92 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:22:18.363 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:22:18.363 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 
00:22:18.363 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.363 | 99.99th=[12147] 00:22:18.363 lat (msec) : >=2000=100.00% 00:22:18.363 cpu : usr=0.01%, sys=0.36%, ctx=85, majf=0, minf=13057 00:22:18.363 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.363 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988230: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=25, BW=25.7MiB/s (26.9MB/s)(313MiB/12178msec) 00:22:18.363 slat (usec): min=102, max=2111.2k, avg=32152.56, stdev=197280.52 00:22:18.363 clat (msec): min=808, max=12015, avg=4347.87, stdev=3230.06 00:22:18.363 lat (msec): min=813, max=12028, avg=4380.03, stdev=3246.32 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 810], 5.00th=[ 818], 10.00th=[ 827], 20.00th=[ 844], 00:22:18.363 | 30.00th=[ 969], 40.00th=[ 2366], 50.00th=[ 3708], 60.00th=[ 5269], 00:22:18.363 | 70.00th=[ 6007], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9329], 00:22:18.363 | 99.00th=[10671], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.363 | 99.99th=[12013] 00:22:18.363 bw ( KiB/s): min= 1882, max=151552, per=1.27%, avg=42300.44, stdev=47369.51, samples=9 00:22:18.363 iops : min= 1, max= 148, avg=41.00, stdev=46.49, samples=9 00:22:18.363 lat (msec) : 1000=30.35%, 2000=0.96%, >=2000=68.69% 00:22:18.363 cpu : usr=0.02%, sys=0.94%, ctx=535, majf=0, minf=32769 00:22:18.363 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:18.363 issued rwts: total=313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988231: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=4, BW=4291KiB/s (4394kB/s)(51.0MiB/12171msec) 00:22:18.363 slat (usec): min=678, max=2198.6k, avg=197106.93, stdev=595608.77 00:22:18.363 clat (msec): min=2117, max=12168, avg=10819.20, stdev=2514.05 00:22:18.363 lat (msec): min=4214, max=12170, avg=11016.30, stdev=2191.58 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8557], 00:22:18.363 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:22:18.363 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:18.363 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.363 | 99.99th=[12147] 00:22:18.363 lat (msec) : >=2000=100.00% 00:22:18.363 cpu : usr=0.00%, sys=0.42%, ctx=79, majf=0, minf=13057 00:22:18.363 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.363 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988232: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=43, BW=43.0MiB/s (45.1MB/s)(434MiB/10092msec) 
00:22:18.363 slat (usec): min=47, max=2111.6k, avg=23047.60, stdev=187076.63 00:22:18.363 clat (msec): min=86, max=7027, avg=2808.48, stdev=2606.05 00:22:18.363 lat (msec): min=390, max=7033, avg=2831.53, stdev=2613.98 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 405], 20.00th=[ 464], 00:22:18.363 | 30.00th=[ 642], 40.00th=[ 877], 50.00th=[ 1804], 60.00th=[ 2072], 00:22:18.363 | 70.00th=[ 4329], 80.00th=[ 6812], 90.00th=[ 6946], 95.00th=[ 7013], 00:22:18.363 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:22:18.363 | 99.99th=[ 7013] 00:22:18.363 bw ( KiB/s): min= 2043, max=259576, per=2.09%, avg=69799.78, stdev=91062.14, samples=9 00:22:18.363 iops : min= 1, max= 253, avg=67.89, stdev=88.98, samples=9 00:22:18.363 lat (msec) : 100=0.23%, 500=22.35%, 750=12.21%, 1000=5.53%, 2000=17.97% 00:22:18.363 lat (msec) : >=2000=41.71% 00:22:18.363 cpu : usr=0.01%, sys=1.03%, ctx=443, majf=0, minf=32769 00:22:18.363 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.363 issued rwts: total=434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988233: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=25, BW=25.2MiB/s (26.4MB/s)(306MiB/12156msec) 00:22:18.363 slat (usec): min=98, max=2136.1k, avg=32846.94, stdev=192456.60 00:22:18.363 clat (msec): min=1524, max=11983, avg=4704.81, stdev=3020.96 00:22:18.363 lat (msec): min=1559, max=12022, avg=4737.66, stdev=3023.09 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 1552], 5.00th=[ 1603], 10.00th=[ 1636], 20.00th=[ 2005], 00:22:18.363 | 30.00th=[ 2072], 40.00th=[ 2165], 50.00th=[ 3574], 60.00th=[ 5738], 00:22:18.363 | 70.00th=[ 6342], 80.00th=[ 8658], 90.00th=[ 9060], 95.00th=[ 9329], 00:22:18.363 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[12013], 99.95th=[12013], 00:22:18.363 | 99.99th=[12013] 00:22:18.363 bw ( KiB/s): min= 1957, max=90112, per=1.22%, avg=40722.33, stdev=33359.22, samples=9 00:22:18.363 iops : min= 1, max= 88, avg=39.67, stdev=32.71, samples=9 00:22:18.363 lat (msec) : 2000=20.26%, >=2000=79.74% 00:22:18.363 cpu : usr=0.00%, sys=0.95%, ctx=688, majf=0, minf=32769 00:22:18.363 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:18.363 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.363 job0: (groupid=0, jobs=1): err= 0: pid=988234: Fri Jun 7 23:13:08 2024 00:22:18.363 read: IOPS=4, BW=4556KiB/s (4666kB/s)(54.0MiB/12136msec) 00:22:18.363 slat (usec): min=739, max=4041.3k, avg=185758.36, stdev=679039.49 00:22:18.363 clat (msec): min=2104, max=12106, avg=9854.26, stdev=2736.10 00:22:18.363 lat (msec): min=4122, max=12135, avg=10040.02, stdev=2532.99 00:22:18.363 clat percentiles (msec): 00:22:18.363 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[10402], 00:22:18.363 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:22:18.363 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:18.363 | 99.00th=[12147], 
99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.363 | 99.99th=[12147] 00:22:18.363 lat (msec) : >=2000=100.00% 00:22:18.363 cpu : usr=0.02%, sys=0.28%, ctx=131, majf=0, minf=13825 00:22:18.363 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:22:18.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.363 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.363 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job0: (groupid=0, jobs=1): err= 0: pid=988235: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=3, BW=3662KiB/s (3750kB/s)(36.0MiB/10066msec) 00:22:18.364 slat (usec): min=523, max=2212.1k, avg=278811.38, stdev=688819.71 00:22:18.364 clat (msec): min=27, max=10064, avg=3069.75, stdev=3417.84 00:22:18.364 lat (msec): min=72, max=10065, avg=3348.56, stdev=3568.67 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 28], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 91], 00:22:18.364 | 30.00th=[ 102], 40.00th=[ 115], 50.00th=[ 2089], 60.00th=[ 2198], 00:22:18.364 | 70.00th=[ 6611], 80.00th=[ 6611], 90.00th=[ 8658], 95.00th=[10000], 00:22:18.364 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:18.364 | 99.99th=[10000] 00:22:18.364 lat (msec) : 50=2.78%, 100=25.00%, 250=13.89%, >=2000=58.33% 00:22:18.364 cpu : usr=0.01%, sys=0.23%, ctx=92, majf=0, minf=9217 00:22:18.364 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.364 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job0: (groupid=0, jobs=1): err= 0: pid=988236: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=40, BW=40.2MiB/s (42.2MB/s)(405MiB/10072msec) 00:22:18.364 slat (usec): min=32, max=2095.7k, avg=24695.21, stdev=186563.12 00:22:18.364 clat (msec): min=66, max=7247, avg=2641.99, stdev=2774.79 00:22:18.364 lat (msec): min=72, max=7252, avg=2666.69, stdev=2777.89 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 87], 5.00th=[ 659], 10.00th=[ 659], 20.00th=[ 667], 00:22:18.364 | 30.00th=[ 667], 40.00th=[ 667], 50.00th=[ 701], 60.00th=[ 944], 00:22:18.364 | 70.00th=[ 4396], 80.00th=[ 6946], 90.00th=[ 7080], 95.00th=[ 7148], 00:22:18.364 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:22:18.364 | 99.99th=[ 7215] 00:22:18.364 bw ( KiB/s): min=12288, max=196608, per=2.68%, avg=89429.33, stdev=85827.25, samples=6 00:22:18.364 iops : min= 12, max= 192, avg=87.33, stdev=83.82, samples=6 00:22:18.364 lat (msec) : 100=1.48%, 250=2.47%, 750=53.33%, 1000=4.94%, 2000=1.23% 00:22:18.364 lat (msec) : >=2000=36.54% 00:22:18.364 cpu : usr=0.03%, sys=1.23%, ctx=351, majf=0, minf=32769 00:22:18.364 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:18.364 issued rwts: total=405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job0: (groupid=0, jobs=1): err= 0: pid=988237: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=2, 
BW=2196KiB/s (2249kB/s)(26.0MiB/12122msec) 00:22:18.364 slat (usec): min=696, max=2208.8k, avg=385041.08, stdev=801091.43 00:22:18.364 clat (msec): min=2109, max=12117, avg=10057.55, stdev=2750.46 00:22:18.364 lat (msec): min=4223, max=12121, avg=10442.59, stdev=2248.23 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8557], 00:22:18.364 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:22:18.364 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:22:18.364 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.364 | 99.99th=[12147] 00:22:18.364 lat (msec) : >=2000=100.00% 00:22:18.364 cpu : usr=0.00%, sys=0.16%, ctx=60, majf=0, minf=6657 00:22:18.364 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.364 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job0: (groupid=0, jobs=1): err= 0: pid=988238: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=1, BW=1860KiB/s (1905kB/s)(22.0MiB/12110msec) 00:22:18.364 slat (msec): min=4, max=2123, avg=454.74, stdev=845.09 00:22:18.364 clat (msec): min=2104, max=12094, avg=8939.61, stdev=3553.37 00:22:18.364 lat (msec): min=2114, max=12108, avg=9394.34, stdev=3265.51 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 6342], 00:22:18.364 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:22:18.364 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:22:18.364 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.364 | 99.99th=[12147] 00:22:18.364 lat (msec) : >=2000=100.00% 00:22:18.364 cpu : usr=0.00%, sys=0.11%, ctx=75, majf=0, minf=5633 00:22:18.364 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.364 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job0: (groupid=0, jobs=1): err= 0: pid=988239: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(228MiB/12147msec) 00:22:18.364 slat (usec): min=491, max=2794.5k, avg=43999.31, stdev=250688.58 00:22:18.364 clat (msec): min=2103, max=6373, avg=5197.33, stdev=1006.63 00:22:18.364 lat (msec): min=2113, max=9168, avg=5241.33, stdev=1016.34 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 2123], 5.00th=[ 3775], 10.00th=[ 3977], 20.00th=[ 4245], 00:22:18.364 | 30.00th=[ 4597], 40.00th=[ 5470], 50.00th=[ 5537], 60.00th=[ 5671], 00:22:18.364 | 70.00th=[ 5873], 80.00th=[ 6074], 90.00th=[ 6208], 95.00th=[ 6275], 00:22:18.364 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:22:18.364 | 99.99th=[ 6342] 00:22:18.364 bw ( KiB/s): min= 2015, max=100352, per=1.24%, avg=41363.00, stdev=47672.88, samples=5 00:22:18.364 iops : min= 1, max= 98, avg=40.20, stdev=46.76, samples=5 00:22:18.364 lat (msec) : >=2000=100.00% 00:22:18.364 cpu : usr=0.00%, sys=0.82%, ctx=558, majf=0, minf=32769 00:22:18.364 IO depths : 1=0.4%, 
2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.4% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:22:18.364 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job1: (groupid=0, jobs=1): err= 0: pid=988253: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=57, BW=57.4MiB/s (60.2MB/s)(585MiB/10185msec) 00:22:18.364 slat (usec): min=56, max=2081.6k, avg=17199.53, stdev=131893.32 00:22:18.364 clat (msec): min=118, max=5077, avg=1965.48, stdev=1505.09 00:22:18.364 lat (msec): min=774, max=5080, avg=1982.68, stdev=1505.00 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 776], 5.00th=[ 785], 10.00th=[ 785], 20.00th=[ 844], 00:22:18.364 | 30.00th=[ 1045], 40.00th=[ 1150], 50.00th=[ 1200], 60.00th=[ 1267], 00:22:18.364 | 70.00th=[ 2299], 80.00th=[ 4396], 90.00th=[ 4732], 95.00th=[ 4866], 00:22:18.364 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:22:18.364 | 99.99th=[ 5067] 00:22:18.364 bw ( KiB/s): min= 2048, max=169984, per=3.11%, avg=103992.89, stdev=54722.03, samples=9 00:22:18.364 iops : min= 2, max= 166, avg=101.56, stdev=53.44, samples=9 00:22:18.364 lat (msec) : 250=0.17%, 1000=27.69%, 2000=41.20%, >=2000=30.94% 00:22:18.364 cpu : usr=0.05%, sys=1.56%, ctx=793, majf=0, minf=32185 00:22:18.364 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.364 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job1: (groupid=0, jobs=1): err= 0: pid=988254: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=82, BW=82.1MiB/s (86.1MB/s)(995MiB/12113msec) 00:22:18.364 slat (usec): min=34, max=2090.7k, avg=10072.21, stdev=100118.49 00:22:18.364 clat (msec): min=380, max=8547, avg=1136.25, stdev=1380.16 00:22:18.364 lat (msec): min=381, max=8566, avg=1146.32, stdev=1389.32 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 380], 5.00th=[ 384], 10.00th=[ 384], 20.00th=[ 388], 00:22:18.364 | 30.00th=[ 409], 40.00th=[ 481], 50.00th=[ 584], 60.00th=[ 693], 00:22:18.364 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 4279], 95.00th=[ 4463], 00:22:18.364 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 8557], 99.95th=[ 8557], 00:22:18.364 | 99.99th=[ 8557] 00:22:18.364 bw ( KiB/s): min= 2031, max=335201, per=4.84%, avg=161572.36, stdev=121921.02, samples=11 00:22:18.364 iops : min= 1, max= 327, avg=157.64, stdev=119.14, samples=11 00:22:18.364 lat (msec) : 500=42.21%, 750=22.81%, 1000=19.10%, >=2000=15.88% 00:22:18.364 cpu : usr=0.04%, sys=1.07%, ctx=938, majf=0, minf=32769 00:22:18.364 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:22:18.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.364 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.364 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.364 job1: (groupid=0, jobs=1): err= 0: pid=988255: Fri Jun 7 23:13:08 2024 00:22:18.364 read: IOPS=21, BW=21.4MiB/s (22.5MB/s)(217MiB/10133msec) 00:22:18.364 slat (usec): min=751, max=2148.5k, 
avg=46128.97, stdev=259443.34 00:22:18.364 clat (msec): min=121, max=8025, avg=5020.11, stdev=2801.49 00:22:18.364 lat (msec): min=1278, max=8031, avg=5066.24, stdev=2778.01 00:22:18.364 clat percentiles (msec): 00:22:18.364 | 1.00th=[ 1267], 5.00th=[ 1301], 10.00th=[ 1318], 20.00th=[ 1351], 00:22:18.364 | 30.00th=[ 2534], 40.00th=[ 2635], 50.00th=[ 6946], 60.00th=[ 7148], 00:22:18.364 | 70.00th=[ 7349], 80.00th=[ 7617], 90.00th=[ 7819], 95.00th=[ 7953], 00:22:18.365 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:22:18.365 | 99.99th=[ 8020] 00:22:18.365 bw ( KiB/s): min= 2048, max=98304, per=0.91%, avg=30378.67, stdev=41899.83, samples=6 00:22:18.365 iops : min= 2, max= 96, avg=29.67, stdev=40.92, samples=6 00:22:18.365 lat (msec) : 250=0.46%, 2000=26.73%, >=2000=72.81% 00:22:18.365 cpu : usr=0.03%, sys=1.10%, ctx=425, majf=0, minf=32769 00:22:18.365 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.4%, 32=14.7%, >=64=71.0% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:22:18.365 issued rwts: total=217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988257: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=2, BW=2121KiB/s (2172kB/s)(25.0MiB/12072msec) 00:22:18.365 slat (msec): min=8, max=2106, avg=400.04, stdev=788.45 00:22:18.365 clat (msec): min=2070, max=12011, avg=6559.09, stdev=3347.30 00:22:18.365 lat (msec): min=2079, max=12071, avg=6959.13, stdev=3385.92 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 2140], 00:22:18.365 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 6477], 00:22:18.365 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[12013], 95.00th=[12013], 00:22:18.365 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.365 | 99.99th=[12013] 00:22:18.365 lat (msec) : >=2000=100.00% 00:22:18.365 cpu : usr=0.01%, sys=0.11%, ctx=73, majf=0, minf=6401 00:22:18.365 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.365 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988258: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=2, BW=2203KiB/s (2256kB/s)(26.0MiB/12087msec) 00:22:18.365 slat (msec): min=2, max=2110, avg=384.62, stdev=773.68 00:22:18.365 clat (msec): min=2085, max=12014, avg=7603.63, stdev=3784.33 00:22:18.365 lat (msec): min=2091, max=12086, avg=7988.25, stdev=3708.51 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 4212], 00:22:18.365 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8658], 00:22:18.365 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:18.365 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.365 | 99.99th=[12013] 00:22:18.365 lat (msec) : >=2000=100.00% 00:22:18.365 cpu : usr=0.00%, sys=0.12%, ctx=85, majf=0, minf=6657 00:22:18.365 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:22:18.365 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.365 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988259: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=12, BW=12.5MiB/s (13.1MB/s)(152MiB/12179msec) 00:22:18.365 slat (usec): min=598, max=2069.9k, avg=66103.90, stdev=327060.20 00:22:18.365 clat (msec): min=1459, max=12113, avg=9614.77, stdev=3163.66 00:22:18.365 lat (msec): min=1460, max=12123, avg=9680.88, stdev=3110.02 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 1452], 5.00th=[ 1586], 10.00th=[ 3675], 20.00th=[ 7953], 00:22:18.365 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10939], 60.00th=[11208], 00:22:18.365 | 70.00th=[11342], 80.00th=[11610], 90.00th=[11879], 95.00th=[12013], 00:22:18.365 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.365 | 99.99th=[12147] 00:22:18.365 bw ( KiB/s): min= 1831, max=14336, per=0.25%, avg=8497.17, stdev=4804.19, samples=6 00:22:18.365 iops : min= 1, max= 14, avg= 8.17, stdev= 4.92, samples=6 00:22:18.365 lat (msec) : 2000=6.58%, >=2000=93.42% 00:22:18.365 cpu : usr=0.01%, sys=0.87%, ctx=417, majf=0, minf=32769 00:22:18.365 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.5%, 32=21.1%, >=64=58.6% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=96.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.8% 00:22:18.365 issued rwts: total=152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988260: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=4, BW=4418KiB/s (4524kB/s)(52.0MiB/12052msec) 00:22:18.365 slat (usec): min=429, max=2093.1k, avg=192333.50, stdev=566718.43 00:22:18.365 clat (msec): min=2049, max=12048, avg=9219.63, stdev=3372.70 00:22:18.365 lat (msec): min=2058, max=12051, avg=9411.96, stdev=3238.30 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2056], 5.00th=[ 2072], 10.00th=[ 2106], 20.00th=[ 6342], 00:22:18.365 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:22:18.365 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:18.365 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.365 | 99.99th=[12013] 00:22:18.365 lat (msec) : >=2000=100.00% 00:22:18.365 cpu : usr=0.00%, sys=0.24%, ctx=97, majf=0, minf=13313 00:22:18.365 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.365 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988261: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=3, BW=3963KiB/s (4059kB/s)(47.0MiB/12143msec) 00:22:18.365 slat (usec): min=721, max=2073.3k, avg=213505.19, stdev=599791.24 00:22:18.365 clat (msec): min=2108, max=12140, avg=9794.03, stdev=3209.00 00:22:18.365 lat (msec): min=4181, max=12142, avg=10007.53, stdev=3014.44 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 
00:22:18.365 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12013], 00:22:18.365 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:18.365 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.365 | 99.99th=[12147] 00:22:18.365 lat (msec) : >=2000=100.00% 00:22:18.365 cpu : usr=0.00%, sys=0.41%, ctx=96, majf=0, minf=12033 00:22:18.365 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.365 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988262: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=12, BW=12.1MiB/s (12.7MB/s)(123MiB/10172msec) 00:22:18.365 slat (usec): min=899, max=2126.9k, avg=81698.10, stdev=370611.07 00:22:18.365 clat (msec): min=121, max=10167, avg=9075.43, stdev=1723.92 00:22:18.365 lat (msec): min=2187, max=10171, avg=9157.13, stdev=1522.47 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2198], 5.00th=[ 4396], 10.00th=[ 8658], 20.00th=[ 8926], 00:22:18.365 | 30.00th=[ 9194], 40.00th=[ 9329], 50.00th=[ 9597], 60.00th=[ 9731], 00:22:18.365 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:22:18.365 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.365 | 99.99th=[10134] 00:22:18.365 lat (msec) : 250=0.81%, >=2000=99.19% 00:22:18.365 cpu : usr=0.02%, sys=0.92%, ctx=374, majf=0, minf=31489 00:22:18.365 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.5%, 16=13.0%, 32=26.0%, >=64=48.8% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:18.365 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988263: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=2, BW=2199KiB/s (2252kB/s)(26.0MiB/12107msec) 00:22:18.365 slat (msec): min=7, max=2102, avg=384.82, stdev=774.80 00:22:18.365 clat (msec): min=2100, max=12093, avg=9620.58, stdev=2930.51 00:22:18.365 lat (msec): min=4203, max=12106, avg=10005.40, stdev=2533.60 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:22:18.365 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:22:18.365 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:18.365 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.365 | 99.99th=[12147] 00:22:18.365 lat (msec) : >=2000=100.00% 00:22:18.365 cpu : usr=0.00%, sys=0.18%, ctx=76, majf=0, minf=6657 00:22:18.365 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.365 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.365 job1: (groupid=0, jobs=1): err= 0: pid=988264: Fri Jun 7 23:13:08 2024 00:22:18.365 read: IOPS=10, BW=10.8MiB/s (11.4MB/s)(110MiB/10141msec) 00:22:18.365 slat 
(usec): min=539, max=2110.4k, avg=91126.06, stdev=378485.61 00:22:18.365 clat (msec): min=116, max=10139, avg=7185.05, stdev=2024.82 00:22:18.365 lat (msec): min=2188, max=10140, avg=7276.18, stdev=1926.98 00:22:18.365 clat percentiles (msec): 00:22:18.365 | 1.00th=[ 2198], 5.00th=[ 4396], 10.00th=[ 5873], 20.00th=[ 6007], 00:22:18.365 | 30.00th=[ 6141], 40.00th=[ 6208], 50.00th=[ 6342], 60.00th=[ 6409], 00:22:18.365 | 70.00th=[ 8658], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:22:18.365 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.365 | 99.99th=[10134] 00:22:18.365 lat (msec) : 250=0.91%, >=2000=99.09% 00:22:18.365 cpu : usr=0.00%, sys=0.73%, ctx=187, majf=0, minf=28161 00:22:18.365 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:22:18.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.365 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:18.366 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job1: (groupid=0, jobs=1): err= 0: pid=988265: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=32, BW=32.6MiB/s (34.1MB/s)(396MiB/12160msec) 00:22:18.366 slat (usec): min=45, max=2079.7k, avg=25381.52, stdev=206901.73 00:22:18.366 clat (msec): min=389, max=11065, avg=3795.60, stdev=4705.57 00:22:18.366 lat (msec): min=393, max=11069, avg=3820.99, stdev=4717.10 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 401], 00:22:18.366 | 30.00th=[ 418], 40.00th=[ 464], 50.00th=[ 498], 60.00th=[ 642], 00:22:18.366 | 70.00th=[ 6879], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:22:18.366 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:22:18.366 | 99.99th=[11073] 00:22:18.366 bw ( KiB/s): min= 1892, max=286720, per=2.36%, avg=78677.00, stdev=123673.86, samples=7 00:22:18.366 iops : min= 1, max= 280, avg=76.57, stdev=120.96, samples=7 00:22:18.366 lat (msec) : 500=50.76%, 750=13.89%, >=2000=35.35% 00:22:18.366 cpu : usr=0.02%, sys=1.09%, ctx=367, majf=0, minf=32769 00:22:18.366 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:18.366 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job1: (groupid=0, jobs=1): err= 0: pid=988266: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(210MiB/10092msec) 00:22:18.366 slat (usec): min=85, max=2087.6k, avg=47666.36, stdev=278739.92 00:22:18.366 clat (msec): min=80, max=9234, avg=5683.98, stdev=3459.92 00:22:18.366 lat (msec): min=103, max=9249, avg=5731.64, stdev=3445.10 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 104], 5.00th=[ 105], 10.00th=[ 142], 20.00th=[ 1150], 00:22:18.366 | 30.00th=[ 3171], 40.00th=[ 5134], 50.00th=[ 6544], 60.00th=[ 8658], 00:22:18.366 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9194], 00:22:18.366 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:22:18.366 | 99.99th=[ 9194] 00:22:18.366 bw ( KiB/s): min=22528, max=55296, per=1.02%, avg=33925.20, stdev=15165.46, samples=5 00:22:18.366 iops : min= 22, max= 54, avg=33.00, stdev=14.70, samples=5 
00:22:18.366 lat (msec) : 100=0.48%, 250=10.00%, 2000=12.86%, >=2000=76.67% 00:22:18.366 cpu : usr=0.03%, sys=0.91%, ctx=397, majf=0, minf=32769 00:22:18.366 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.0% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:22:18.366 issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988275: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=4, BW=4572KiB/s (4682kB/s)(45.0MiB/10079msec) 00:22:18.366 slat (usec): min=658, max=2140.7k, avg=222240.38, stdev=616133.33 00:22:18.366 clat (msec): min=77, max=10070, avg=3114.21, stdev=3160.60 00:22:18.366 lat (msec): min=81, max=10078, avg=3336.45, stdev=3291.14 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 92], 20.00th=[ 106], 00:22:18.366 | 30.00th=[ 2072], 40.00th=[ 2198], 50.00th=[ 2198], 60.00th=[ 2198], 00:22:18.366 | 70.00th=[ 2198], 80.00th=[ 6477], 90.00th=[10000], 95.00th=[10000], 00:22:18.366 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.366 | 99.99th=[10134] 00:22:18.366 lat (msec) : 100=15.56%, 250=8.89%, >=2000=75.56% 00:22:18.366 cpu : usr=0.00%, sys=0.31%, ctx=93, majf=0, minf=11521 00:22:18.366 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.366 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988276: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=101, BW=101MiB/s (106MB/s)(1229MiB/12141msec) 00:22:18.366 slat (usec): min=46, max=2113.6k, avg=8166.76, stdev=118484.00 00:22:18.366 clat (msec): min=123, max=10858, avg=1232.68, stdev=3191.35 00:22:18.366 lat (msec): min=124, max=10859, avg=1240.85, stdev=3202.70 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 125], 5.00th=[ 126], 10.00th=[ 126], 20.00th=[ 127], 00:22:18.366 | 30.00th=[ 127], 40.00th=[ 127], 50.00th=[ 127], 60.00th=[ 128], 00:22:18.366 | 70.00th=[ 128], 80.00th=[ 129], 90.00th=[ 8423], 95.00th=[10805], 00:22:18.366 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:22:18.366 | 99.99th=[10805] 00:22:18.366 bw ( KiB/s): min= 1957, max=1032192, per=8.45%, avg=282198.00, stdev=421149.91, samples=8 00:22:18.366 iops : min= 1, max= 1008, avg=275.38, stdev=411.34, samples=8 00:22:18.366 lat (msec) : 250=86.82%, 500=2.03%, >=2000=11.15% 00:22:18.366 cpu : usr=0.03%, sys=1.25%, ctx=1161, majf=0, minf=32769 00:22:18.366 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.366 issued rwts: total=1229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988277: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=2, BW=2294KiB/s (2349kB/s)(27.0MiB/12054msec) 00:22:18.366 slat (usec): min=538, max=2094.0k, avg=445001.06, 
stdev=824647.78 00:22:18.366 clat (msec): min=38, max=12026, avg=5575.71, stdev=3316.21 00:22:18.366 lat (msec): min=2061, max=12053, avg=6020.71, stdev=3350.52 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 39], 5.00th=[ 2056], 10.00th=[ 2072], 20.00th=[ 2106], 00:22:18.366 | 30.00th=[ 4212], 40.00th=[ 4245], 50.00th=[ 4329], 60.00th=[ 6409], 00:22:18.366 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10805], 00:22:18.366 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.366 | 99.99th=[12013] 00:22:18.366 lat (msec) : 50=3.70%, >=2000=96.30% 00:22:18.366 cpu : usr=0.01%, sys=0.15%, ctx=82, majf=0, minf=6913 00:22:18.366 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.366 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988278: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=90, BW=90.8MiB/s (95.2MB/s)(915MiB/10073msec) 00:22:18.366 slat (usec): min=47, max=2042.2k, avg=10932.64, stdev=104642.16 00:22:18.366 clat (msec): min=66, max=6572, avg=1352.01, stdev=1725.65 00:22:18.366 lat (msec): min=78, max=6576, avg=1362.95, stdev=1735.76 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 243], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 245], 00:22:18.366 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 401], 60.00th=[ 535], 00:22:18.366 | 70.00th=[ 1167], 80.00th=[ 2433], 90.00th=[ 5134], 95.00th=[ 5336], 00:22:18.366 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 6544], 99.95th=[ 6544], 00:22:18.366 | 99.99th=[ 6544] 00:22:18.366 bw ( KiB/s): min= 2048, max=506890, per=4.02%, avg=134400.83, stdev=168977.59, samples=12 00:22:18.366 iops : min= 2, max= 495, avg=131.25, stdev=165.02, samples=12 00:22:18.366 lat (msec) : 100=0.33%, 250=46.34%, 500=4.70%, 750=13.44%, 1000=3.50% 00:22:18.366 lat (msec) : 2000=3.17%, >=2000=28.52% 00:22:18.366 cpu : usr=0.03%, sys=1.19%, ctx=1165, majf=0, minf=32769 00:22:18.366 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.366 issued rwts: total=915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988279: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=77, BW=77.7MiB/s (81.4MB/s)(790MiB/10171msec) 00:22:18.366 slat (usec): min=37, max=2033.7k, avg=12719.17, stdev=94252.88 00:22:18.366 clat (msec): min=119, max=4047, avg=1438.80, stdev=1240.48 00:22:18.366 lat (msec): min=379, max=4048, avg=1451.52, stdev=1243.37 00:22:18.366 clat percentiles (msec): 00:22:18.366 | 1.00th=[ 380], 5.00th=[ 388], 10.00th=[ 409], 20.00th=[ 502], 00:22:18.366 | 30.00th=[ 558], 40.00th=[ 651], 50.00th=[ 869], 60.00th=[ 911], 00:22:18.366 | 70.00th=[ 1636], 80.00th=[ 3071], 90.00th=[ 3742], 95.00th=[ 3910], 00:22:18.366 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 4044], 99.95th=[ 4044], 00:22:18.366 | 99.99th=[ 4044] 00:22:18.366 bw ( KiB/s): min= 4096, max=280576, per=3.69%, avg=123241.91, stdev=101075.08, samples=11 00:22:18.366 iops : min= 4, max= 274, avg=120.27, 
stdev=98.76, samples=11 00:22:18.366 lat (msec) : 250=0.13%, 500=19.62%, 750=22.91%, 1000=20.51%, 2000=11.39% 00:22:18.366 lat (msec) : >=2000=25.44% 00:22:18.366 cpu : usr=0.04%, sys=1.52%, ctx=1296, majf=0, minf=32769 00:22:18.366 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:22:18.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.366 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.366 issued rwts: total=790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.366 job2: (groupid=0, jobs=1): err= 0: pid=988280: Fri Jun 7 23:13:08 2024 00:22:18.366 read: IOPS=1, BW=1357KiB/s (1390kB/s)(16.0MiB/12073msec) 00:22:18.366 slat (msec): min=8, max=2184, avg=625.10, stdev=946.45 00:22:18.366 clat (msec): min=2070, max=12052, avg=5990.96, stdev=3965.41 00:22:18.366 lat (msec): min=2082, max=12072, avg=6616.06, stdev=4092.46 00:22:18.366 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 2123], 00:22:18.367 | 30.00th=[ 2123], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409], 00:22:18.367 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013], 00:22:18.367 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:18.367 | 99.99th=[12013] 00:22:18.367 lat (msec) : >=2000=100.00% 00:22:18.367 cpu : usr=0.00%, sys=0.09%, ctx=78, majf=0, minf=4097 00:22:18.367 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988281: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=15, BW=15.4MiB/s (16.2MB/s)(156MiB/10126msec) 00:22:18.367 slat (usec): min=362, max=2058.8k, avg=64247.80, stdev=307349.13 00:22:18.367 clat (msec): min=102, max=9887, avg=5196.92, stdev=2029.67 00:22:18.367 lat (msec): min=2161, max=9925, avg=5261.17, stdev=2024.95 00:22:18.367 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 2165], 5.00th=[ 3440], 10.00th=[ 3507], 20.00th=[ 3675], 00:22:18.367 | 30.00th=[ 3809], 40.00th=[ 3977], 50.00th=[ 4144], 60.00th=[ 4329], 00:22:18.367 | 70.00th=[ 6477], 80.00th=[ 6611], 90.00th=[ 8658], 95.00th=[ 8658], 00:22:18.367 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:22:18.367 | 99.99th=[ 9866] 00:22:18.367 bw ( KiB/s): min= 4096, max=44966, per=0.57%, avg=19084.67, stdev=22507.26, samples=3 00:22:18.367 iops : min= 4, max= 43, avg=18.33, stdev=21.46, samples=3 00:22:18.367 lat (msec) : 250=0.64%, >=2000=99.36% 00:22:18.367 cpu : usr=0.00%, sys=0.84%, ctx=326, majf=0, minf=32769 00:22:18.367 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.3% 00:22:18.367 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988282: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=6, BW=6504KiB/s (6660kB/s)(64.0MiB/10077msec) 00:22:18.367 slat (usec): 
min=808, max=2074.3k, avg=156405.35, stdev=507668.32 00:22:18.367 clat (msec): min=66, max=10069, avg=6584.73, stdev=3436.95 00:22:18.367 lat (msec): min=79, max=10076, avg=6741.14, stdev=3362.59 00:22:18.367 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 67], 5.00th=[ 109], 10.00th=[ 2198], 20.00th=[ 2299], 00:22:18.367 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 6544], 60.00th=[ 8658], 00:22:18.367 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000], 00:22:18.367 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.367 | 99.99th=[10134] 00:22:18.367 lat (msec) : 100=4.69%, 250=4.69%, >=2000=90.62% 00:22:18.367 cpu : usr=0.00%, sys=0.36%, ctx=170, majf=0, minf=16385 00:22:18.367 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.367 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988283: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=73, BW=73.2MiB/s (76.8MB/s)(737MiB/10063msec) 00:22:18.367 slat (usec): min=37, max=2091.2k, avg=13585.74, stdev=113633.34 00:22:18.367 clat (msec): min=47, max=5977, avg=1308.18, stdev=1792.40 00:22:18.367 lat (msec): min=68, max=6028, avg=1321.76, stdev=1805.42 00:22:18.367 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 79], 5.00th=[ 251], 10.00th=[ 251], 20.00th=[ 255], 00:22:18.367 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 321], 00:22:18.367 | 70.00th=[ 1062], 80.00th=[ 2299], 90.00th=[ 4732], 95.00th=[ 5470], 00:22:18.367 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:22:18.367 | 99.99th=[ 6007] 00:22:18.367 bw ( KiB/s): min=12288, max=496609, per=5.34%, avg=178317.86, stdev=211619.80, samples=7 00:22:18.367 iops : min= 12, max= 484, avg=174.00, stdev=206.42, samples=7 00:22:18.367 lat (msec) : 50=0.14%, 100=1.22%, 250=3.26%, 500=57.39%, 1000=3.53% 00:22:18.367 lat (msec) : 2000=13.43%, >=2000=21.03% 00:22:18.367 cpu : usr=0.01%, sys=1.12%, ctx=1137, majf=0, minf=32769 00:22:18.367 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.367 issued rwts: total=737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988284: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=182, BW=183MiB/s (191MB/s)(1853MiB/10153msec) 00:22:18.367 slat (usec): min=45, max=2035.0k, avg=5403.49, stdev=47729.86 00:22:18.367 clat (msec): min=131, max=2755, avg=676.51, stdev=512.62 00:22:18.367 lat (msec): min=383, max=2757, avg=681.91, stdev=513.91 00:22:18.367 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 414], 00:22:18.367 | 30.00th=[ 481], 40.00th=[ 523], 50.00th=[ 531], 60.00th=[ 550], 00:22:18.367 | 70.00th=[ 600], 80.00th=[ 693], 90.00th=[ 835], 95.00th=[ 2366], 00:22:18.367 | 99.00th=[ 2702], 99.50th=[ 2735], 99.90th=[ 2769], 99.95th=[ 2769], 00:22:18.367 | 99.99th=[ 2769] 00:22:18.367 bw ( KiB/s): min=112640, max=331776, per=7.05%, avg=235520.60, stdev=59116.37, samples=15 00:22:18.367 iops : 
min= 110, max= 324, avg=229.93, stdev=57.74, samples=15 00:22:18.367 lat (msec) : 250=0.05%, 500=31.95%, 750=54.61%, 1000=6.53%, >=2000=6.85% 00:22:18.367 cpu : usr=0.13%, sys=2.57%, ctx=1767, majf=0, minf=32769 00:22:18.367 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.367 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988285: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=5, BW=5718KiB/s (5856kB/s)(68.0MiB/12177msec) 00:22:18.367 slat (usec): min=630, max=2120.4k, avg=148264.73, stdev=513805.42 00:22:18.367 clat (msec): min=2094, max=12175, avg=11053.96, stdev=2465.51 00:22:18.367 lat (msec): min=4176, max=12176, avg=11202.23, stdev=2208.42 00:22:18.367 clat percentiles (msec): 00:22:18.367 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 6409], 20.00th=[10671], 00:22:18.367 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:22:18.367 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:18.367 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.367 | 99.99th=[12147] 00:22:18.367 lat (msec) : >=2000=100.00% 00:22:18.367 cpu : usr=0.00%, sys=0.53%, ctx=109, majf=0, minf=17409 00:22:18.367 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:22:18.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.367 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:18.367 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.367 job2: (groupid=0, jobs=1): err= 0: pid=988287: Fri Jun 7 23:13:08 2024 00:22:18.367 read: IOPS=227, BW=227MiB/s (238MB/s)(2304MiB/10132msec) 00:22:18.369 slat (usec): min=37, max=2012.2k, avg=4352.52, stdev=64665.27 00:22:18.369 clat (msec): min=94, max=4608, avg=509.88, stdev=1026.88 00:22:18.369 lat (msec): min=123, max=4608, avg=514.24, stdev=1030.98 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 125], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 126], 00:22:18.369 | 30.00th=[ 127], 40.00th=[ 127], 50.00th=[ 127], 60.00th=[ 128], 00:22:18.369 | 70.00th=[ 128], 80.00th=[ 409], 90.00th=[ 1250], 95.00th=[ 4212], 00:22:18.369 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:22:18.369 | 99.99th=[ 4597] 00:22:18.369 bw ( KiB/s): min= 2048, max=1032192, per=11.12%, avg=371370.67, stdev=438208.77, samples=12 00:22:18.369 iops : min= 2, max= 1008, avg=362.67, stdev=427.94, samples=12 00:22:18.369 lat (msec) : 100=0.04%, 250=79.77%, 500=0.26%, 750=0.04%, 1000=7.77% 00:22:18.369 lat (msec) : 2000=5.21%, >=2000=6.90% 00:22:18.369 cpu : usr=0.08%, sys=2.72%, ctx=2220, majf=0, minf=32769 00:22:18.369 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.369 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job2: (groupid=0, jobs=1): err= 0: pid=988288: Fri Jun 7 23:13:08 2024 00:22:18.369 
read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(385MiB/12027msec) 00:22:18.369 slat (usec): min=59, max=2090.3k, avg=25990.03, stdev=164979.13 00:22:18.369 clat (msec): min=919, max=8501, avg=3188.40, stdev=2488.34 00:22:18.369 lat (msec): min=938, max=8518, avg=3214.39, stdev=2494.37 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 944], 5.00th=[ 953], 10.00th=[ 978], 20.00th=[ 1116], 00:22:18.369 | 30.00th=[ 1334], 40.00th=[ 1620], 50.00th=[ 1888], 60.00th=[ 2089], 00:22:18.369 | 70.00th=[ 5537], 80.00th=[ 6946], 90.00th=[ 7215], 95.00th=[ 7349], 00:22:18.369 | 99.00th=[ 7483], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490], 00:22:18.369 | 99.99th=[ 8490] 00:22:18.369 bw ( KiB/s): min= 6144, max=145408, per=1.98%, avg=66023.38, stdev=54595.23, samples=8 00:22:18.369 iops : min= 6, max= 142, avg=64.38, stdev=53.25, samples=8 00:22:18.369 lat (msec) : 1000=13.51%, 2000=41.56%, >=2000=44.94% 00:22:18.369 cpu : usr=0.00%, sys=0.81%, ctx=920, majf=0, minf=32769 00:22:18.369 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:18.369 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job3: (groupid=0, jobs=1): err= 0: pid=988293: Fri Jun 7 23:13:08 2024 00:22:18.369 read: IOPS=80, BW=80.8MiB/s (84.7MB/s)(812MiB/10053msec) 00:22:18.369 slat (usec): min=40, max=2066.0k, avg=12337.74, stdev=84527.97 00:22:18.369 clat (msec): min=29, max=4517, avg=1320.06, stdev=669.37 00:22:18.369 lat (msec): min=660, max=4522, avg=1332.40, stdev=670.08 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 676], 5.00th=[ 743], 10.00th=[ 776], 20.00th=[ 827], 00:22:18.369 | 30.00th=[ 894], 40.00th=[ 1003], 50.00th=[ 1062], 60.00th=[ 1183], 00:22:18.369 | 70.00th=[ 1318], 80.00th=[ 1586], 90.00th=[ 2366], 95.00th=[ 2903], 00:22:18.369 | 99.00th=[ 3239], 99.50th=[ 3239], 99.90th=[ 4530], 99.95th=[ 4530], 00:22:18.369 | 99.99th=[ 4530] 00:22:18.369 bw ( KiB/s): min=43008, max=188416, per=3.49%, avg=116695.67, stdev=45943.20, samples=12 00:22:18.369 iops : min= 42, max= 184, avg=113.83, stdev=44.86, samples=12 00:22:18.369 lat (msec) : 50=0.12%, 750=5.30%, 1000=34.61%, 2000=42.00%, >=2000=17.98% 00:22:18.369 cpu : usr=0.04%, sys=1.41%, ctx=1406, majf=0, minf=32769 00:22:18.369 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.2% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.369 issued rwts: total=812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job3: (groupid=0, jobs=1): err= 0: pid=988294: Fri Jun 7 23:13:08 2024 00:22:18.369 read: IOPS=107, BW=108MiB/s (113MB/s)(1090MiB/10093msec) 00:22:18.369 slat (usec): min=42, max=2098.6k, avg=9171.56, stdev=89771.11 00:22:18.369 clat (msec): min=89, max=4938, avg=1134.43, stdev=1314.38 00:22:18.369 lat (msec): min=93, max=4941, avg=1143.60, stdev=1319.13 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 142], 5.00th=[ 330], 10.00th=[ 518], 20.00th=[ 527], 00:22:18.369 | 30.00th=[ 550], 40.00th=[ 609], 50.00th=[ 642], 60.00th=[ 659], 00:22:18.369 | 70.00th=[ 877], 80.00th=[ 953], 90.00th=[ 4732], 95.00th=[ 4866], 00:22:18.369 | 99.00th=[ 
4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:22:18.369 | 99.99th=[ 4933] 00:22:18.369 bw ( KiB/s): min=30720, max=249856, per=4.54%, avg=151709.54, stdev=83422.85, samples=13 00:22:18.369 iops : min= 30, max= 244, avg=148.15, stdev=81.47, samples=13 00:22:18.369 lat (msec) : 100=0.28%, 250=2.84%, 500=5.69%, 750=57.80%, 1000=16.24% 00:22:18.369 lat (msec) : 2000=4.13%, >=2000=13.03% 00:22:18.369 cpu : usr=0.03%, sys=1.86%, ctx=1257, majf=0, minf=32769 00:22:18.369 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.369 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job3: (groupid=0, jobs=1): err= 0: pid=988295: Fri Jun 7 23:13:08 2024 00:22:18.369 read: IOPS=39, BW=39.0MiB/s (40.9MB/s)(397MiB/10177msec) 00:22:18.369 slat (usec): min=132, max=2051.0k, avg=25398.88, stdev=119953.60 00:22:18.369 clat (msec): min=90, max=5329, avg=2838.34, stdev=848.70 00:22:18.369 lat (msec): min=1389, max=5346, avg=2863.74, stdev=841.38 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 1401], 5.00th=[ 1502], 10.00th=[ 1620], 20.00th=[ 2232], 00:22:18.369 | 30.00th=[ 2433], 40.00th=[ 2635], 50.00th=[ 2836], 60.00th=[ 3004], 00:22:18.369 | 70.00th=[ 3104], 80.00th=[ 3239], 90.00th=[ 3977], 95.00th=[ 4597], 00:22:18.369 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:22:18.369 | 99.99th=[ 5336] 00:22:18.369 bw ( KiB/s): min= 2048, max=81920, per=1.27%, avg=42357.92, stdev=22804.64, samples=13 00:22:18.369 iops : min= 2, max= 80, avg=41.23, stdev=22.13, samples=13 00:22:18.369 lat (msec) : 100=0.25%, 2000=13.35%, >=2000=86.40% 00:22:18.369 cpu : usr=0.01%, sys=1.44%, ctx=1439, majf=0, minf=32769 00:22:18.369 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:18.369 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job3: (groupid=0, jobs=1): err= 0: pid=988296: Fri Jun 7 23:13:08 2024 00:22:18.369 read: IOPS=5, BW=5561KiB/s (5694kB/s)(55.0MiB/10128msec) 00:22:18.369 slat (usec): min=666, max=2079.8k, avg=181830.48, stdev=553736.73 00:22:18.369 clat (msec): min=126, max=10124, avg=7430.39, stdev=3133.53 00:22:18.369 lat (msec): min=2171, max=10127, avg=7612.22, stdev=2988.67 00:22:18.369 clat percentiles (msec): 00:22:18.369 | 1.00th=[ 127], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 4329], 00:22:18.369 | 30.00th=[ 6409], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10000], 00:22:18.369 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:22:18.369 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.369 | 99.99th=[10134] 00:22:18.369 lat (msec) : 250=1.82%, >=2000=98.18% 00:22:18.369 cpu : usr=0.00%, sys=0.48%, ctx=114, majf=0, minf=14081 00:22:18.369 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:22:18.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.369 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.369 issued rwts: total=55,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:18.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.369 job3: (groupid=0, jobs=1): err= 0: pid=988297: Fri Jun 7 23:13:08 2024 00:22:18.369 read: IOPS=56, BW=57.0MiB/s (59.8MB/s)(573MiB/10053msec) 00:22:18.369 slat (usec): min=68, max=1873.5k, avg=17482.04, stdev=94990.01 00:22:18.369 clat (msec): min=32, max=3152, avg=1696.05, stdev=578.96 00:22:18.369 lat (msec): min=78, max=3158, avg=1713.53, stdev=576.82 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 125], 5.00th=[ 609], 10.00th=[ 659], 20.00th=[ 1435], 00:22:18.370 | 30.00th=[ 1552], 40.00th=[ 1670], 50.00th=[ 1720], 60.00th=[ 1821], 00:22:18.370 | 70.00th=[ 2005], 80.00th=[ 2123], 90.00th=[ 2500], 95.00th=[ 2567], 00:22:18.370 | 99.00th=[ 2601], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:22:18.370 | 99.99th=[ 3138] 00:22:18.370 bw ( KiB/s): min=22528, max=185996, per=2.45%, avg=81911.45, stdev=50168.34, samples=11 00:22:18.370 iops : min= 22, max= 181, avg=79.82, stdev=48.94, samples=11 00:22:18.370 lat (msec) : 50=0.17%, 100=0.35%, 250=0.52%, 750=10.47%, 1000=4.71% 00:22:18.370 lat (msec) : 2000=53.93%, >=2000=29.84% 00:22:18.370 cpu : usr=0.00%, sys=1.24%, ctx=1462, majf=0, minf=32769 00:22:18.370 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.370 issued rwts: total=573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988298: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=108, BW=109MiB/s (114MB/s)(1097MiB/10072msec) 00:22:18.370 slat (usec): min=40, max=2077.4k, avg=9110.85, stdev=95142.54 00:22:18.370 clat (msec): min=69, max=4722, avg=988.74, stdev=1268.67 00:22:18.370 lat (msec): min=88, max=4724, avg=997.85, stdev=1272.62 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 384], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 397], 00:22:18.370 | 30.00th=[ 401], 40.00th=[ 405], 50.00th=[ 518], 60.00th=[ 642], 00:22:18.370 | 70.00th=[ 667], 80.00th=[ 693], 90.00th=[ 4396], 95.00th=[ 4597], 00:22:18.370 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:22:18.370 | 99.99th=[ 4732] 00:22:18.370 bw ( KiB/s): min=20480, max=327680, per=5.95%, avg=198656.00, stdev=115582.54, samples=10 00:22:18.370 iops : min= 20, max= 320, avg=194.00, stdev=112.87, samples=10 00:22:18.370 lat (msec) : 100=0.82%, 250=0.09%, 500=48.40%, 750=36.65%, 2000=0.64% 00:22:18.370 lat (msec) : >=2000=13.40% 00:22:18.370 cpu : usr=0.09%, sys=1.95%, ctx=971, majf=0, minf=32769 00:22:18.370 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.370 issued rwts: total=1097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988299: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=71, BW=71.9MiB/s (75.4MB/s)(722MiB/10040msec) 00:22:18.370 slat (usec): min=41, max=2047.3k, avg=13860.01, stdev=116055.69 00:22:18.370 clat (msec): min=30, max=4020, avg=1167.36, stdev=886.23 00:22:18.370 lat (msec): min=49, max=4034, avg=1181.22, stdev=891.90 
00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 51], 5.00th=[ 376], 10.00th=[ 409], 20.00th=[ 477], 00:22:18.370 | 30.00th=[ 558], 40.00th=[ 676], 50.00th=[ 802], 60.00th=[ 894], 00:22:18.370 | 70.00th=[ 1351], 80.00th=[ 2140], 90.00th=[ 2702], 95.00th=[ 2769], 00:22:18.370 | 99.00th=[ 3641], 99.50th=[ 3943], 99.90th=[ 4010], 99.95th=[ 4010], 00:22:18.370 | 99.99th=[ 4010] 00:22:18.370 bw ( KiB/s): min=16384, max=286720, per=4.05%, avg=135168.00, stdev=97645.80, samples=9 00:22:18.370 iops : min= 16, max= 280, avg=132.00, stdev=95.36, samples=9 00:22:18.370 lat (msec) : 50=0.55%, 100=0.69%, 500=22.44%, 750=24.79%, 1000=13.99% 00:22:18.370 lat (msec) : 2000=16.62%, >=2000=20.91% 00:22:18.370 cpu : usr=0.00%, sys=1.07%, ctx=1334, majf=0, minf=32769 00:22:18.370 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.370 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988300: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=80, BW=80.5MiB/s (84.4MB/s)(809MiB/10050msec) 00:22:18.370 slat (usec): min=37, max=1983.2k, avg=12379.54, stdev=74375.07 00:22:18.370 clat (msec): min=31, max=3753, avg=1501.17, stdev=807.36 00:22:18.370 lat (msec): min=438, max=3759, avg=1513.55, stdev=807.29 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 493], 5.00th=[ 667], 10.00th=[ 693], 20.00th=[ 852], 00:22:18.370 | 30.00th=[ 1070], 40.00th=[ 1167], 50.00th=[ 1267], 60.00th=[ 1385], 00:22:18.370 | 70.00th=[ 1620], 80.00th=[ 2056], 90.00th=[ 2970], 95.00th=[ 3272], 00:22:18.370 | 99.00th=[ 3675], 99.50th=[ 3742], 99.90th=[ 3742], 99.95th=[ 3742], 00:22:18.370 | 99.99th=[ 3742] 00:22:18.370 bw ( KiB/s): min= 4096, max=186368, per=2.61%, avg=87141.06, stdev=52269.95, samples=16 00:22:18.370 iops : min= 4, max= 182, avg=84.94, stdev=51.10, samples=16 00:22:18.370 lat (msec) : 50=0.12%, 500=1.85%, 750=14.96%, 1000=8.41%, 2000=53.89% 00:22:18.370 lat (msec) : >=2000=20.77% 00:22:18.370 cpu : usr=0.02%, sys=1.41%, ctx=1662, majf=0, minf=32769 00:22:18.370 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.370 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988301: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=7, BW=7333KiB/s (7509kB/s)(72.0MiB/10054msec) 00:22:18.370 slat (usec): min=525, max=2059.9k, avg=139168.22, stdev=459474.31 00:22:18.370 clat (msec): min=33, max=10005, avg=6796.83, stdev=2798.36 00:22:18.370 lat (msec): min=80, max=10053, avg=6936.00, stdev=2704.86 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 34], 5.00th=[ 111], 10.00th=[ 2165], 20.00th=[ 4329], 00:22:18.370 | 30.00th=[ 6544], 40.00th=[ 8154], 50.00th=[ 8288], 60.00th=[ 8356], 00:22:18.370 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10000], 00:22:18.370 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:18.370 | 99.99th=[10000] 00:22:18.370 lat (msec) : 50=1.39%, 100=2.78%, 250=1.39%, >=2000=94.44% 
00:22:18.370 cpu : usr=0.00%, sys=0.43%, ctx=171, majf=0, minf=18433 00:22:18.370 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:18.370 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988302: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=74, BW=74.9MiB/s (78.5MB/s)(751MiB/10027msec) 00:22:18.370 slat (usec): min=45, max=2043.1k, avg=13309.98, stdev=125182.85 00:22:18.370 clat (msec): min=25, max=7989, avg=655.58, stdev=1138.72 00:22:18.370 lat (msec): min=26, max=8056, avg=668.89, stdev=1170.38 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 124], 5.00th=[ 220], 10.00th=[ 326], 20.00th=[ 372], 00:22:18.370 | 30.00th=[ 376], 40.00th=[ 380], 50.00th=[ 380], 60.00th=[ 384], 00:22:18.370 | 70.00th=[ 384], 80.00th=[ 388], 90.00th=[ 439], 95.00th=[ 4010], 00:22:18.370 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 8020], 99.95th=[ 8020], 00:22:18.370 | 99.99th=[ 8020] 00:22:18.370 bw ( KiB/s): min=306586, max=342016, per=9.86%, avg=329297.00, stdev=19715.51, samples=3 00:22:18.370 iops : min= 299, max= 334, avg=321.33, stdev=19.40, samples=3 00:22:18.370 lat (msec) : 50=0.53%, 100=0.13%, 250=5.99%, 500=86.42%, >=2000=6.92% 00:22:18.370 cpu : usr=0.03%, sys=1.39%, ctx=811, majf=0, minf=32769 00:22:18.370 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.370 issued rwts: total=751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988303: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=5, BW=5264KiB/s (5391kB/s)(52.0MiB/10115msec) 00:22:18.370 slat (usec): min=371, max=2088.0k, avg=192353.00, stdev=560860.18 00:22:18.370 clat (msec): min=111, max=10112, avg=5698.90, stdev=3823.59 00:22:18.370 lat (msec): min=120, max=10114, avg=5891.25, stdev=3788.45 00:22:18.370 clat percentiles (msec): 00:22:18.370 | 1.00th=[ 112], 5.00th=[ 131], 10.00th=[ 153], 20.00th=[ 2198], 00:22:18.370 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 4463], 60.00th=[ 8658], 00:22:18.370 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:22:18.370 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:18.370 | 99.99th=[10134] 00:22:18.370 lat (msec) : 250=19.23%, >=2000=80.77% 00:22:18.370 cpu : usr=0.00%, sys=0.35%, ctx=116, majf=0, minf=13313 00:22:18.370 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:22:18.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.370 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:18.370 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.370 job3: (groupid=0, jobs=1): err= 0: pid=988304: Fri Jun 7 23:13:08 2024 00:22:18.370 read: IOPS=66, BW=66.5MiB/s (69.8MB/s)(669MiB/10057msec) 00:22:18.370 slat (usec): min=40, max=1980.4k, avg=14978.38, stdev=89694.11 00:22:18.370 clat (msec): min=32, max=2818, avg=1613.51, 
stdev=659.64 00:22:18.371 lat (msec): min=87, max=2824, avg=1628.49, stdev=658.69 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 592], 5.00th=[ 625], 10.00th=[ 693], 20.00th=[ 743], 00:22:18.371 | 30.00th=[ 1351], 40.00th=[ 1569], 50.00th=[ 1687], 60.00th=[ 1787], 00:22:18.371 | 70.00th=[ 1955], 80.00th=[ 2265], 90.00th=[ 2500], 95.00th=[ 2668], 00:22:18.371 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:22:18.371 | 99.99th=[ 2802] 00:22:18.371 bw ( KiB/s): min=47104, max=220742, per=3.00%, avg=100103.64, stdev=57017.84, samples=11 00:22:18.371 iops : min= 46, max= 215, avg=97.64, stdev=55.53, samples=11 00:22:18.371 lat (msec) : 50=0.15%, 100=0.15%, 250=0.30%, 750=19.43%, 1000=6.58% 00:22:18.371 lat (msec) : 2000=47.68%, >=2000=25.71% 00:22:18.371 cpu : usr=0.01%, sys=1.39%, ctx=1602, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.371 issued rwts: total=669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job3: (groupid=0, jobs=1): err= 0: pid=988305: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=45, BW=45.4MiB/s (47.6MB/s)(458MiB/10081msec) 00:22:18.371 slat (usec): min=32, max=1980.5k, avg=21837.49, stdev=108252.54 00:22:18.371 clat (msec): min=76, max=3775, avg=2246.59, stdev=652.96 00:22:18.371 lat (msec): min=84, max=3815, avg=2268.43, stdev=643.86 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 1469], 5.00th=[ 1502], 10.00th=[ 1552], 20.00th=[ 1737], 00:22:18.371 | 30.00th=[ 1854], 40.00th=[ 2005], 50.00th=[ 2056], 60.00th=[ 2265], 00:22:18.371 | 70.00th=[ 2400], 80.00th=[ 2668], 90.00th=[ 3406], 95.00th=[ 3608], 00:22:18.371 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775], 00:22:18.371 | 99.99th=[ 3775] 00:22:18.371 bw ( KiB/s): min= 8192, max=96256, per=1.69%, avg=56476.92, stdev=23961.30, samples=12 00:22:18.371 iops : min= 8, max= 94, avg=55.00, stdev=23.51, samples=12 00:22:18.371 lat (msec) : 100=0.66%, 250=0.22%, 2000=38.65%, >=2000=60.48% 00:22:18.371 cpu : usr=0.04%, sys=1.02%, ctx=1420, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.2% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.371 issued rwts: total=458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988319: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(190MiB/10101msec) 00:22:18.371 slat (usec): min=454, max=2100.2k, avg=52732.47, stdev=273113.92 00:22:18.371 clat (msec): min=80, max=9400, avg=2517.60, stdev=3121.00 00:22:18.371 lat (msec): min=121, max=9404, avg=2570.33, stdev=3157.46 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 122], 5.00th=[ 174], 10.00th=[ 326], 20.00th=[ 642], 00:22:18.371 | 30.00th=[ 844], 40.00th=[ 1028], 50.00th=[ 1217], 60.00th=[ 1401], 00:22:18.371 | 70.00th=[ 1653], 80.00th=[ 3943], 90.00th=[ 9194], 95.00th=[ 9329], 00:22:18.371 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:22:18.371 | 99.99th=[ 9463] 00:22:18.371 bw ( KiB/s): min=53248, 
max=73728, per=1.90%, avg=63488.00, stdev=14481.55, samples=2 00:22:18.371 iops : min= 52, max= 72, avg=62.00, stdev=14.14, samples=2 00:22:18.371 lat (msec) : 100=0.53%, 250=6.84%, 500=8.42%, 750=8.42%, 1000=13.68% 00:22:18.371 lat (msec) : 2000=41.58%, >=2000=20.53% 00:22:18.371 cpu : usr=0.00%, sys=0.73%, ctx=540, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:22:18.371 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988321: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=41, BW=41.9MiB/s (43.9MB/s)(424MiB/10121msec) 00:22:18.371 slat (usec): min=57, max=2120.5k, avg=23627.34, stdev=164816.74 00:22:18.371 clat (msec): min=101, max=6289, avg=1715.13, stdev=1476.69 00:22:18.371 lat (msec): min=131, max=6295, avg=1738.76, stdev=1501.20 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 159], 5.00th=[ 368], 10.00th=[ 542], 20.00th=[ 592], 00:22:18.371 | 30.00th=[ 625], 40.00th=[ 1099], 50.00th=[ 1586], 60.00th=[ 1720], 00:22:18.371 | 70.00th=[ 1821], 80.00th=[ 2198], 90.00th=[ 2601], 95.00th=[ 6208], 00:22:18.371 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:22:18.371 | 99.99th=[ 6275] 00:22:18.371 bw ( KiB/s): min=12288, max=135168, per=2.59%, avg=86543.71, stdev=43005.23, samples=7 00:22:18.371 iops : min= 12, max= 132, avg=84.29, stdev=41.92, samples=7 00:22:18.371 lat (msec) : 250=2.59%, 500=4.48%, 750=25.47%, 1000=5.66%, 2000=37.74% 00:22:18.371 lat (msec) : >=2000=24.06% 00:22:18.371 cpu : usr=0.00%, sys=1.19%, ctx=890, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.1% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.371 issued rwts: total=424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988322: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=48, BW=48.5MiB/s (50.8MB/s)(586MiB/12088msec) 00:22:18.371 slat (usec): min=42, max=2122.1k, avg=20558.39, stdev=145698.82 00:22:18.371 clat (msec): min=38, max=6509, avg=2506.36, stdev=1565.53 00:22:18.371 lat (msec): min=381, max=6547, avg=2526.92, stdev=1569.95 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 380], 5.00th=[ 384], 10.00th=[ 409], 20.00th=[ 667], 00:22:18.371 | 30.00th=[ 1301], 40.00th=[ 2333], 50.00th=[ 2869], 60.00th=[ 3205], 00:22:18.371 | 70.00th=[ 3540], 80.00th=[ 3708], 90.00th=[ 4212], 95.00th=[ 4279], 00:22:18.371 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:22:18.371 | 99.99th=[ 6477] 00:22:18.371 bw ( KiB/s): min= 8192, max=231424, per=2.34%, avg=78165.33, stdev=55833.54, samples=12 00:22:18.371 iops : min= 8, max= 226, avg=76.33, stdev=54.52, samples=12 00:22:18.371 lat (msec) : 50=0.17%, 500=14.85%, 750=8.19%, 1000=2.73%, 2000=13.14% 00:22:18.371 lat (msec) : >=2000=60.92% 00:22:18.371 cpu : usr=0.02%, sys=0.99%, ctx=1306, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.371 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988323: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=109, BW=109MiB/s (115MB/s)(1321MiB/12078msec) 00:22:18.371 slat (usec): min=40, max=1199.6k, avg=7577.88, stdev=46284.73 00:22:18.371 clat (msec): min=374, max=3886, avg=983.59, stdev=864.69 00:22:18.371 lat (msec): min=376, max=3895, avg=991.16, stdev=870.30 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 380], 5.00th=[ 384], 10.00th=[ 388], 20.00th=[ 401], 00:22:18.371 | 30.00th=[ 422], 40.00th=[ 464], 50.00th=[ 510], 60.00th=[ 625], 00:22:18.371 | 70.00th=[ 1099], 80.00th=[ 1838], 90.00th=[ 2198], 95.00th=[ 3037], 00:22:18.371 | 99.00th=[ 3809], 99.50th=[ 3842], 99.90th=[ 3876], 99.95th=[ 3876], 00:22:18.371 | 99.99th=[ 3876] 00:22:18.371 bw ( KiB/s): min= 1503, max=331776, per=4.88%, avg=162984.47, stdev=113650.82, samples=15 00:22:18.371 iops : min= 1, max= 324, avg=159.13, stdev=111.03, samples=15 00:22:18.371 lat (msec) : 500=47.92%, 750=16.65%, 1000=3.56%, 2000=20.59%, >=2000=11.28% 00:22:18.371 cpu : usr=0.15%, sys=1.49%, ctx=1820, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.371 issued rwts: total=1321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988324: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=79, BW=79.5MiB/s (83.3MB/s)(803MiB/10106msec) 00:22:18.371 slat (usec): min=46, max=2067.9k, avg=12451.15, stdev=105202.67 00:22:18.371 clat (msec): min=101, max=7489, avg=1544.53, stdev=1986.18 00:22:18.371 lat (msec): min=105, max=7492, avg=1556.98, stdev=1995.28 00:22:18.371 clat percentiles (msec): 00:22:18.371 | 1.00th=[ 138], 5.00th=[ 275], 10.00th=[ 388], 20.00th=[ 405], 00:22:18.371 | 30.00th=[ 456], 40.00th=[ 531], 50.00th=[ 625], 60.00th=[ 827], 00:22:18.371 | 70.00th=[ 1083], 80.00th=[ 1485], 90.00th=[ 6074], 95.00th=[ 6208], 00:22:18.371 | 99.00th=[ 6275], 99.50th=[ 7416], 99.90th=[ 7483], 99.95th=[ 7483], 00:22:18.371 | 99.99th=[ 7483] 00:22:18.371 bw ( KiB/s): min= 2048, max=272384, per=3.45%, avg=115350.50, stdev=89005.14, samples=12 00:22:18.371 iops : min= 2, max= 266, avg=112.58, stdev=86.92, samples=12 00:22:18.371 lat (msec) : 250=4.23%, 500=31.76%, 750=20.92%, 1000=10.46%, 2000=14.45% 00:22:18.371 lat (msec) : >=2000=18.18% 00:22:18.371 cpu : usr=0.09%, sys=1.76%, ctx=1004, majf=0, minf=32769 00:22:18.371 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:22:18.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.371 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.371 issued rwts: total=803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.371 job4: (groupid=0, jobs=1): err= 0: pid=988325: Fri Jun 7 23:13:08 2024 00:22:18.371 read: IOPS=127, BW=127MiB/s (133MB/s)(1281MiB/10084msec) 00:22:18.371 slat (usec): min=551, max=1234.9k, avg=7812.89, stdev=35802.21 00:22:18.372 clat (msec): 
min=68, max=3403, avg=755.13, stdev=429.83 00:22:18.372 lat (msec): min=116, max=3479, avg=762.95, stdev=438.35 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:22:18.372 | 30.00th=[ 384], 40.00th=[ 776], 50.00th=[ 818], 60.00th=[ 844], 00:22:18.372 | 70.00th=[ 995], 80.00th=[ 1053], 90.00th=[ 1099], 95.00th=[ 1385], 00:22:18.372 | 99.00th=[ 2106], 99.50th=[ 2165], 99.90th=[ 3373], 99.95th=[ 3406], 00:22:18.372 | 99.99th=[ 3406] 00:22:18.372 bw ( KiB/s): min=14336, max=419840, per=5.05%, avg=168625.57, stdev=109558.62, samples=14 00:22:18.372 iops : min= 14, max= 410, avg=164.57, stdev=107.01, samples=14 00:22:18.372 lat (msec) : 100=0.08%, 250=20.30%, 500=13.51%, 750=5.31%, 1000=32.01% 00:22:18.372 lat (msec) : 2000=26.93%, >=2000=1.87% 00:22:18.372 cpu : usr=0.06%, sys=2.22%, ctx=2635, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.372 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988326: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=25, BW=25.4MiB/s (26.6MB/s)(307MiB/12084msec) 00:22:18.372 slat (usec): min=423, max=2097.7k, avg=39230.91, stdev=237115.56 00:22:18.372 clat (msec): min=38, max=8197, avg=4830.34, stdev=2548.02 00:22:18.372 lat (msec): min=712, max=8200, avg=4869.58, stdev=2533.53 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 1200], 5.00th=[ 1368], 10.00th=[ 1703], 20.00th=[ 2366], 00:22:18.372 | 30.00th=[ 3104], 40.00th=[ 3440], 50.00th=[ 3809], 60.00th=[ 5537], 00:22:18.372 | 70.00th=[ 7684], 80.00th=[ 7752], 90.00th=[ 7953], 95.00th=[ 8087], 00:22:18.372 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:22:18.372 | 99.99th=[ 8221] 00:22:18.372 bw ( KiB/s): min= 4096, max=112640, per=1.22%, avg=40732.44, stdev=39132.94, samples=9 00:22:18.372 iops : min= 4, max= 110, avg=39.78, stdev=38.22, samples=9 00:22:18.372 lat (msec) : 50=0.33%, 750=0.65%, 2000=14.01%, >=2000=85.02% 00:22:18.372 cpu : usr=0.00%, sys=0.71%, ctx=869, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:18.372 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988327: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=10, BW=10.1MiB/s (10.6MB/s)(122MiB/12104msec) 00:22:18.372 slat (usec): min=1202, max=2096.7k, avg=98875.29, stdev=384200.89 00:22:18.372 clat (msec): min=40, max=12088, avg=3912.82, stdev=2497.27 00:22:18.372 lat (msec): min=2135, max=12103, avg=4011.69, stdev=2580.13 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 2140], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 2601], 00:22:18.372 | 30.00th=[ 2735], 40.00th=[ 2970], 50.00th=[ 3239], 60.00th=[ 3473], 00:22:18.372 | 70.00th=[ 3641], 80.00th=[ 3876], 90.00th=[ 6409], 95.00th=[12013], 00:22:18.372 | 99.00th=[12013], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:18.372 | 
99.99th=[12147] 00:22:18.372 lat (msec) : 50=0.82%, >=2000=99.18% 00:22:18.372 cpu : usr=0.00%, sys=0.55%, ctx=538, majf=0, minf=31233 00:22:18.372 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.6%, 16=13.1%, 32=26.2%, >=64=48.4% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:18.372 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988328: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=27, BW=27.4MiB/s (28.7MB/s)(332MiB/12110msec) 00:22:18.372 slat (usec): min=539, max=2066.4k, avg=30272.04, stdev=173331.71 00:22:18.372 clat (msec): min=1326, max=6333, avg=3952.08, stdev=1818.29 00:22:18.372 lat (msec): min=1332, max=6355, avg=3982.35, stdev=1813.11 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 1334], 5.00th=[ 1385], 10.00th=[ 1653], 20.00th=[ 2299], 00:22:18.372 | 30.00th=[ 2668], 40.00th=[ 3004], 50.00th=[ 3406], 60.00th=[ 3842], 00:22:18.372 | 70.00th=[ 6074], 80.00th=[ 6141], 90.00th=[ 6208], 95.00th=[ 6275], 00:22:18.372 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:22:18.372 | 99.99th=[ 6342] 00:22:18.372 bw ( KiB/s): min= 1434, max=79872, per=1.25%, avg=41904.50, stdev=31064.33, samples=10 00:22:18.372 iops : min= 1, max= 78, avg=40.70, stdev=30.39, samples=10 00:22:18.372 lat (msec) : 2000=17.17%, >=2000=82.83% 00:22:18.372 cpu : usr=0.01%, sys=0.90%, ctx=960, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.0% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:18.372 issued rwts: total=332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988329: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=127, BW=127MiB/s (133MB/s)(1282MiB/10089msec) 00:22:18.372 slat (usec): min=535, max=1241.8k, avg=7810.82, stdev=35964.75 00:22:18.372 clat (msec): min=68, max=3421, avg=748.09, stdev=419.92 00:22:18.372 lat (msec): min=116, max=3503, avg=755.90, stdev=428.78 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 245], 00:22:18.372 | 30.00th=[ 384], 40.00th=[ 776], 50.00th=[ 810], 60.00th=[ 844], 00:22:18.372 | 70.00th=[ 961], 80.00th=[ 1053], 90.00th=[ 1099], 95.00th=[ 1334], 00:22:18.372 | 99.00th=[ 2106], 99.50th=[ 2232], 99.90th=[ 3406], 99.95th=[ 3406], 00:22:18.372 | 99.99th=[ 3406] 00:22:18.372 bw ( KiB/s): min=112640, max=430080, per=5.44%, avg=181780.15, stdev=104161.66, samples=13 00:22:18.372 iops : min= 110, max= 420, avg=177.46, stdev=101.76, samples=13 00:22:18.372 lat (msec) : 100=0.08%, 250=20.36%, 500=13.26%, 750=4.76%, 1000=35.41% 00:22:18.372 lat (msec) : 2000=24.49%, >=2000=1.64% 00:22:18.372 cpu : usr=0.06%, sys=2.22%, ctx=2636, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.372 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988330: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=20, BW=20.0MiB/s (21.0MB/s)(242MiB/12097msec) 00:22:18.372 slat (usec): min=644, max=2106.1k, avg=41405.20, stdev=231033.49 00:22:18.372 clat (msec): min=1127, max=8765, avg=3382.13, stdev=2180.27 00:22:18.372 lat (msec): min=1133, max=8788, avg=3423.54, stdev=2200.67 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 1133], 5.00th=[ 1150], 10.00th=[ 1167], 20.00th=[ 1267], 00:22:18.372 | 30.00th=[ 2467], 40.00th=[ 2769], 50.00th=[ 2937], 60.00th=[ 3138], 00:22:18.372 | 70.00th=[ 3339], 80.00th=[ 5134], 90.00th=[ 7483], 95.00th=[ 8658], 00:22:18.372 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:22:18.372 | 99.99th=[ 8792] 00:22:18.372 bw ( KiB/s): min= 1462, max=118784, per=1.76%, avg=58733.50, stdev=51332.15, samples=4 00:22:18.372 iops : min= 1, max= 116, avg=57.25, stdev=50.29, samples=4 00:22:18.372 lat (msec) : 2000=25.62%, >=2000=74.38% 00:22:18.372 cpu : usr=0.02%, sys=0.95%, ctx=613, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.6%, 32=13.2%, >=64=74.0% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:22:18.372 issued rwts: total=242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988331: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=54, BW=54.8MiB/s (57.4MB/s)(662MiB/12085msec) 00:22:18.372 slat (usec): min=39, max=2041.2k, avg=15150.24, stdev=144806.59 00:22:18.372 clat (msec): min=365, max=8372, avg=1154.48, stdev=1586.23 00:22:18.372 lat (msec): min=369, max=8374, avg=1169.63, stdev=1610.51 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 368], 5.00th=[ 372], 10.00th=[ 372], 20.00th=[ 376], 00:22:18.372 | 30.00th=[ 380], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 384], 00:22:18.372 | 70.00th=[ 584], 80.00th=[ 2265], 90.00th=[ 2467], 95.00th=[ 4866], 00:22:18.372 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:22:18.372 | 99.99th=[ 8356] 00:22:18.372 bw ( KiB/s): min= 1475, max=348160, per=6.56%, avg=219021.40, stdev=156793.69, samples=5 00:22:18.372 iops : min= 1, max= 340, avg=213.80, stdev=153.27, samples=5 00:22:18.372 lat (msec) : 500=69.03%, 750=4.23%, >=2000=26.74% 00:22:18.372 cpu : usr=0.02%, sys=1.23%, ctx=706, majf=0, minf=32769 00:22:18.372 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:22:18.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.372 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.372 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.372 job4: (groupid=0, jobs=1): err= 0: pid=988333: Fri Jun 7 23:13:08 2024 00:22:18.372 read: IOPS=69, BW=69.1MiB/s (72.5MB/s)(834MiB/12063msec) 00:22:18.372 slat (usec): min=38, max=2108.9k, avg=11998.46, stdev=84952.58 00:22:18.372 clat (msec): min=381, max=5494, avg=1207.65, stdev=948.59 00:22:18.372 lat (msec): min=383, max=5500, avg=1219.65, stdev=962.31 00:22:18.372 clat percentiles (msec): 00:22:18.372 | 1.00th=[ 384], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 405], 00:22:18.372 | 30.00th=[ 435], 40.00th=[ 684], 50.00th=[ 852], 60.00th=[ 1234], 00:22:18.372 | 70.00th=[ 1452], 
80.00th=[ 1787], 90.00th=[ 2769], 95.00th=[ 3171], 00:22:18.373 | 99.00th=[ 4178], 99.50th=[ 5403], 99.90th=[ 5470], 99.95th=[ 5470], 00:22:18.373 | 99.99th=[ 5470] 00:22:18.373 bw ( KiB/s): min= 8192, max=320894, per=4.33%, avg=144729.40, stdev=99744.31, samples=10 00:22:18.373 iops : min= 8, max= 313, avg=141.30, stdev=97.33, samples=10 00:22:18.373 lat (msec) : 500=32.13%, 750=15.59%, 1000=7.19%, 2000=27.10%, >=2000=17.99% 00:22:18.373 cpu : usr=0.02%, sys=0.94%, ctx=1570, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.373 issued rwts: total=834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988340: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=156, BW=157MiB/s (164MB/s)(1584MiB/10108msec) 00:22:18.373 slat (usec): min=40, max=2053.4k, avg=6317.71, stdev=61688.19 00:22:18.373 clat (msec): min=96, max=4483, avg=783.05, stdev=991.63 00:22:18.373 lat (msec): min=172, max=4486, avg=789.37, stdev=996.45 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:22:18.373 | 30.00th=[ 197], 40.00th=[ 205], 50.00th=[ 241], 60.00th=[ 279], 00:22:18.373 | 70.00th=[ 852], 80.00th=[ 1418], 90.00th=[ 2039], 95.00th=[ 3239], 00:22:18.373 | 99.00th=[ 4329], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:22:18.373 | 99.99th=[ 4463] 00:22:18.373 bw ( KiB/s): min=26624, max=686080, per=6.38%, avg=212980.86, stdev=246145.71, samples=14 00:22:18.373 iops : min= 26, max= 670, avg=207.93, stdev=240.41, samples=14 00:22:18.373 lat (msec) : 100=0.06%, 250=52.15%, 500=12.44%, 750=3.47%, 1000=4.92% 00:22:18.373 lat (msec) : 2000=15.72%, >=2000=11.24% 00:22:18.373 cpu : usr=0.02%, sys=1.95%, ctx=2944, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.373 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988341: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=23, BW=24.0MiB/s (25.1MB/s)(288MiB/12022msec) 00:22:18.373 slat (usec): min=60, max=2033.5k, avg=34857.32, stdev=216834.26 00:22:18.373 clat (msec): min=473, max=7083, avg=2998.81, stdev=2038.61 00:22:18.373 lat (msec): min=475, max=7138, avg=3033.66, stdev=2048.00 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 477], 5.00th=[ 481], 10.00th=[ 514], 20.00th=[ 776], 00:22:18.373 | 30.00th=[ 1200], 40.00th=[ 1452], 50.00th=[ 2769], 60.00th=[ 4463], 00:22:18.373 | 70.00th=[ 4530], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 7013], 00:22:18.373 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:22:18.373 | 99.99th=[ 7080] 00:22:18.373 bw ( KiB/s): min= 1542, max=218698, per=2.46%, avg=82196.00, stdev=98287.27, samples=4 00:22:18.373 iops : min= 1, max= 213, avg=80.00, stdev=95.86, samples=4 00:22:18.373 lat (msec) : 500=9.03%, 750=10.42%, 1000=6.60%, 2000=17.01%, >=2000=56.94% 00:22:18.373 cpu : usr=0.01%, sys=0.58%, ctx=1004, majf=0, minf=32769 
00:22:18.373 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.1%, >=64=78.1% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:18.373 issued rwts: total=288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988342: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=49, BW=49.4MiB/s (51.8MB/s)(594MiB/12019msec) 00:22:18.373 slat (usec): min=38, max=2041.1k, avg=16891.70, stdev=100415.21 00:22:18.373 clat (msec): min=411, max=5967, avg=2162.94, stdev=1181.12 00:22:18.373 lat (msec): min=422, max=6870, avg=2179.84, stdev=1187.86 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 430], 5.00th=[ 510], 10.00th=[ 609], 20.00th=[ 1133], 00:22:18.373 | 30.00th=[ 1284], 40.00th=[ 1435], 50.00th=[ 1838], 60.00th=[ 3071], 00:22:18.373 | 70.00th=[ 3239], 80.00th=[ 3406], 90.00th=[ 3574], 95.00th=[ 3775], 00:22:18.373 | 99.00th=[ 3876], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940], 00:22:18.373 | 99.99th=[ 5940] 00:22:18.373 bw ( KiB/s): min= 1546, max=284103, per=2.38%, avg=79572.50, stdev=76989.94, samples=12 00:22:18.373 iops : min= 1, max= 277, avg=77.50, stdev=75.06, samples=12 00:22:18.373 lat (msec) : 500=4.04%, 750=10.27%, 1000=3.70%, 2000=37.21%, >=2000=44.78% 00:22:18.373 cpu : usr=0.04%, sys=0.86%, ctx=1476, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.373 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988344: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=150, BW=151MiB/s (158MB/s)(1528MiB/10131msec) 00:22:18.373 slat (usec): min=39, max=2136.8k, avg=6563.35, stdev=81316.51 00:22:18.373 clat (msec): min=95, max=3075, avg=735.90, stdev=853.89 00:22:18.373 lat (msec): min=123, max=3078, avg=742.46, stdev=856.82 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 126], 5.00th=[ 207], 10.00th=[ 230], 20.00th=[ 253], 00:22:18.373 | 30.00th=[ 257], 40.00th=[ 271], 50.00th=[ 326], 60.00th=[ 368], 00:22:18.373 | 70.00th=[ 418], 80.00th=[ 953], 90.00th=[ 2500], 95.00th=[ 2567], 00:22:18.373 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3071], 99.95th=[ 3071], 00:22:18.373 | 99.99th=[ 3071] 00:22:18.373 bw ( KiB/s): min=51200, max=509952, per=8.58%, avg=286677.00, stdev=160106.42, samples=10 00:22:18.373 iops : min= 50, max= 498, avg=279.90, stdev=156.38, samples=10 00:22:18.373 lat (msec) : 100=0.07%, 250=17.60%, 500=55.17%, 750=2.88%, 1000=5.96% 00:22:18.373 lat (msec) : 2000=1.70%, >=2000=16.62% 00:22:18.373 cpu : usr=0.11%, sys=1.77%, ctx=2782, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.373 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988345: Fri Jun 7 23:13:08 2024 00:22:18.373 read: 
IOPS=51, BW=51.3MiB/s (53.8MB/s)(522MiB/10178msec) 00:22:18.373 slat (usec): min=68, max=2165.9k, avg=19266.83, stdev=129741.16 00:22:18.373 clat (msec): min=117, max=3825, avg=2368.13, stdev=1006.66 00:22:18.373 lat (msec): min=1010, max=3829, avg=2387.39, stdev=1001.32 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 1053], 5.00th=[ 1133], 10.00th=[ 1267], 20.00th=[ 1418], 00:22:18.373 | 30.00th=[ 1552], 40.00th=[ 1569], 50.00th=[ 1620], 60.00th=[ 3104], 00:22:18.373 | 70.00th=[ 3373], 80.00th=[ 3540], 90.00th=[ 3641], 95.00th=[ 3675], 00:22:18.373 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], 00:22:18.373 | 99.99th=[ 3809] 00:22:18.373 bw ( KiB/s): min=20480, max=126976, per=2.20%, avg=73334.36, stdev=34408.89, samples=11 00:22:18.373 iops : min= 20, max= 124, avg=71.55, stdev=33.51, samples=11 00:22:18.373 lat (msec) : 250=0.19%, 2000=51.15%, >=2000=48.66% 00:22:18.373 cpu : usr=0.02%, sys=1.45%, ctx=1550, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.373 issued rwts: total=522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988346: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=46, BW=46.7MiB/s (49.0MB/s)(470MiB/10057msec) 00:22:18.373 slat (usec): min=574, max=2181.3k, avg=21295.58, stdev=160498.02 00:22:18.373 clat (msec): min=45, max=5848, avg=2512.67, stdev=2053.83 00:22:18.373 lat (msec): min=61, max=5858, avg=2533.97, stdev=2055.97 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 109], 5.00th=[ 542], 10.00th=[ 550], 20.00th=[ 567], 00:22:18.373 | 30.00th=[ 693], 40.00th=[ 844], 50.00th=[ 2366], 60.00th=[ 2567], 00:22:18.373 | 70.00th=[ 2769], 80.00th=[ 5604], 90.00th=[ 5738], 95.00th=[ 5805], 00:22:18.373 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:22:18.373 | 99.99th=[ 5873] 00:22:18.373 bw ( KiB/s): min= 2043, max=228918, per=2.33%, avg=77772.56, stdev=87199.86, samples=9 00:22:18.373 iops : min= 1, max= 223, avg=75.78, stdev=85.15, samples=9 00:22:18.373 lat (msec) : 50=0.21%, 100=0.64%, 250=0.21%, 750=35.74%, 1000=5.74% 00:22:18.373 lat (msec) : 2000=2.55%, >=2000=54.89% 00:22:18.373 cpu : usr=0.00%, sys=1.10%, ctx=1833, majf=0, minf=32769 00:22:18.373 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:22:18.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.373 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.373 issued rwts: total=470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.373 job5: (groupid=0, jobs=1): err= 0: pid=988347: Fri Jun 7 23:13:08 2024 00:22:18.373 read: IOPS=2, BW=2654KiB/s (2718kB/s)(26.0MiB/10031msec) 00:22:18.373 slat (usec): min=1552, max=2142.2k, avg=385155.53, stdev=769873.94 00:22:18.373 clat (msec): min=16, max=9977, avg=5281.50, stdev=3913.00 00:22:18.373 lat (msec): min=46, max=10030, avg=5666.66, stdev=3866.58 00:22:18.373 clat percentiles (msec): 00:22:18.373 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 100], 00:22:18.373 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 6611], 00:22:18.373 | 70.00th=[ 8557], 80.00th=[ 8658], 
90.00th=[10000], 95.00th=[10000], 00:22:18.373 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:18.373 | 99.99th=[10000] 00:22:18.373 lat (msec) : 20=3.85%, 50=3.85%, 100=15.38%, 250=3.85%, >=2000=73.08% 00:22:18.373 cpu : usr=0.02%, sys=0.15%, ctx=92, majf=0, minf=6657 00:22:18.374 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:22:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.374 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:18.374 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.374 job5: (groupid=0, jobs=1): err= 0: pid=988348: Fri Jun 7 23:13:08 2024 00:22:18.374 read: IOPS=57, BW=57.6MiB/s (60.4MB/s)(580MiB/10062msec) 00:22:18.374 slat (usec): min=79, max=2029.3k, avg=17235.12, stdev=134085.79 00:22:18.374 clat (msec): min=60, max=7192, avg=2118.74, stdev=2281.85 00:22:18.374 lat (msec): min=63, max=7196, avg=2135.97, stdev=2290.61 00:22:18.374 clat percentiles (msec): 00:22:18.374 | 1.00th=[ 122], 5.00th=[ 300], 10.00th=[ 502], 20.00th=[ 718], 00:22:18.374 | 30.00th=[ 743], 40.00th=[ 785], 50.00th=[ 810], 60.00th=[ 894], 00:22:18.374 | 70.00th=[ 2836], 80.00th=[ 5000], 90.00th=[ 6879], 95.00th=[ 7080], 00:22:18.374 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:22:18.374 | 99.99th=[ 7215] 00:22:18.374 bw ( KiB/s): min=10240, max=172032, per=2.52%, avg=84304.18, stdev=59012.26, samples=11 00:22:18.374 iops : min= 10, max= 168, avg=82.09, stdev=57.62, samples=11 00:22:18.374 lat (msec) : 100=0.86%, 250=2.59%, 500=6.38%, 750=21.21%, 1000=32.93% 00:22:18.374 lat (msec) : 2000=4.48%, >=2000=31.55% 00:22:18.374 cpu : usr=0.09%, sys=1.32%, ctx=1057, majf=0, minf=32769 00:22:18.374 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:22:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.374 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.374 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.374 job5: (groupid=0, jobs=1): err= 0: pid=988349: Fri Jun 7 23:13:08 2024 00:22:18.374 read: IOPS=93, BW=93.9MiB/s (98.5MB/s)(953MiB/10148msec) 00:22:18.374 slat (usec): min=48, max=2179.3k, avg=10541.93, stdev=97629.99 00:22:18.374 clat (msec): min=95, max=3695, avg=1303.79, stdev=1075.20 00:22:18.374 lat (msec): min=397, max=3701, avg=1314.33, stdev=1077.14 00:22:18.374 clat percentiles (msec): 00:22:18.374 | 1.00th=[ 401], 5.00th=[ 414], 10.00th=[ 426], 20.00th=[ 464], 00:22:18.374 | 30.00th=[ 735], 40.00th=[ 760], 50.00th=[ 768], 60.00th=[ 810], 00:22:18.374 | 70.00th=[ 978], 80.00th=[ 2534], 90.00th=[ 3440], 95.00th=[ 3608], 00:22:18.374 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 3708], 99.95th=[ 3708], 00:22:18.374 | 99.99th=[ 3708] 00:22:18.374 bw ( KiB/s): min= 4096, max=315392, per=4.21%, avg=140768.25, stdev=88853.38, samples=12 00:22:18.374 iops : min= 4, max= 308, avg=137.42, stdev=86.74, samples=12 00:22:18.374 lat (msec) : 100=0.10%, 500=22.56%, 750=11.54%, 1000=37.36%, 2000=1.78% 00:22:18.374 lat (msec) : >=2000=26.65% 00:22:18.374 cpu : usr=0.01%, sys=1.86%, ctx=2140, majf=0, minf=32769 00:22:18.374 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:22:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:18.374 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.374 issued rwts: total=953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.374 job5: (groupid=0, jobs=1): err= 0: pid=988350: Fri Jun 7 23:13:08 2024 00:22:18.374 read: IOPS=54, BW=54.9MiB/s (57.6MB/s)(553MiB/10071msec) 00:22:18.374 slat (usec): min=42, max=2060.3k, avg=18079.43, stdev=134014.62 00:22:18.374 clat (msec): min=68, max=4134, avg=1548.07, stdev=1102.51 00:22:18.374 lat (msec): min=117, max=4151, avg=1566.15, stdev=1110.40 00:22:18.374 clat percentiles (msec): 00:22:18.374 | 1.00th=[ 165], 5.00th=[ 397], 10.00th=[ 726], 20.00th=[ 785], 00:22:18.374 | 30.00th=[ 785], 40.00th=[ 793], 50.00th=[ 844], 60.00th=[ 1401], 00:22:18.374 | 70.00th=[ 1536], 80.00th=[ 3104], 90.00th=[ 3339], 95.00th=[ 3507], 00:22:18.374 | 99.00th=[ 3641], 99.50th=[ 4077], 99.90th=[ 4144], 99.95th=[ 4144], 00:22:18.374 | 99.99th=[ 4144] 00:22:18.374 bw ( KiB/s): min=26624, max=167936, per=2.90%, avg=96918.22, stdev=44871.92, samples=9 00:22:18.374 iops : min= 26, max= 164, avg=94.56, stdev=43.83, samples=9 00:22:18.374 lat (msec) : 100=0.18%, 250=2.53%, 500=3.98%, 750=3.44%, 1000=43.58% 00:22:18.374 lat (msec) : 2000=19.17%, >=2000=27.12% 00:22:18.374 cpu : usr=0.07%, sys=1.33%, ctx=780, majf=0, minf=32769 00:22:18.374 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:22:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.374 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:18.374 issued rwts: total=553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.374 job5: (groupid=0, jobs=1): err= 0: pid=988351: Fri Jun 7 23:13:08 2024 00:22:18.374 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(252MiB/10093msec) 00:22:18.374 slat (usec): min=717, max=2140.7k, avg=39703.87, stdev=238183.70 00:22:18.374 clat (msec): min=86, max=6726, avg=2530.78, stdev=1681.63 00:22:18.374 lat (msec): min=619, max=6730, avg=2570.48, stdev=1690.02 00:22:18.374 clat percentiles (msec): 00:22:18.374 | 1.00th=[ 634], 5.00th=[ 667], 10.00th=[ 701], 20.00th=[ 818], 00:22:18.374 | 30.00th=[ 961], 40.00th=[ 2400], 50.00th=[ 2769], 60.00th=[ 3171], 00:22:18.374 | 70.00th=[ 3239], 80.00th=[ 3306], 90.00th=[ 5336], 95.00th=[ 6678], 00:22:18.374 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:22:18.374 | 99.99th=[ 6745] 00:22:18.374 bw ( KiB/s): min= 2048, max=157696, per=1.92%, avg=64000.00, stdev=66560.00, samples=4 00:22:18.374 iops : min= 2, max= 154, avg=62.50, stdev=65.00, samples=4 00:22:18.374 lat (msec) : 100=0.40%, 750=16.27%, 1000=16.27%, 2000=5.16%, >=2000=61.90% 00:22:18.374 cpu : usr=0.00%, sys=0.77%, ctx=910, majf=0, minf=32769 00:22:18.374 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0% 00:22:18.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.374 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:22:18.374 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.374 job5: (groupid=0, jobs=1): err= 0: pid=988352: Fri Jun 7 23:13:08 2024 00:22:18.374 read: IOPS=43, BW=43.8MiB/s (46.0MB/s)(444MiB/10128msec) 00:22:18.374 slat (usec): min=1653, max=2171.6k, avg=22538.84, stdev=179425.94 00:22:18.374 clat (msec): min=118, max=6486, 
avg=1400.43, stdev=1322.72 00:22:18.374 lat (msec): min=494, max=6500, avg=1422.97, stdev=1343.61 00:22:18.374 clat percentiles (msec): 00:22:18.374 | 1.00th=[ 493], 5.00th=[ 498], 10.00th=[ 502], 20.00th=[ 518], 00:22:18.374 | 30.00th=[ 531], 40.00th=[ 550], 50.00th=[ 625], 60.00th=[ 785], 00:22:18.374 | 70.00th=[ 2366], 80.00th=[ 2534], 90.00th=[ 2702], 95.00th=[ 2970], 00:22:18.375 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:22:18.375 | 99.99th=[ 6477] 00:22:18.375 bw ( KiB/s): min= 2048, max=253952, per=3.89%, avg=129843.20, stdev=117332.71, samples=5 00:22:18.375 iops : min= 2, max= 248, avg=126.80, stdev=114.58, samples=5 00:22:18.375 lat (msec) : 250=0.23%, 500=7.88%, 750=50.45%, 1000=7.21%, >=2000=34.23% 00:22:18.375 cpu : usr=0.01%, sys=0.99%, ctx=1697, majf=0, minf=32769 00:22:18.375 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:22:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.375 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:18.375 issued rwts: total=444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.375 job5: (groupid=0, jobs=1): err= 0: pid=988353: Fri Jun 7 23:13:08 2024 00:22:18.375 read: IOPS=67, BW=67.9MiB/s (71.2MB/s)(820MiB/12077msec) 00:22:18.375 slat (usec): min=39, max=2026.3k, avg=12304.04, stdev=111669.19 00:22:18.375 clat (msec): min=327, max=6119, avg=1116.86, stdev=1143.02 00:22:18.375 lat (msec): min=329, max=6130, avg=1129.17, stdev=1156.84 00:22:18.375 clat percentiles (msec): 00:22:18.375 | 1.00th=[ 342], 5.00th=[ 372], 10.00th=[ 393], 20.00th=[ 397], 00:22:18.375 | 30.00th=[ 409], 40.00th=[ 514], 50.00th=[ 558], 60.00th=[ 726], 00:22:18.375 | 70.00th=[ 1020], 80.00th=[ 1284], 90.00th=[ 2970], 95.00th=[ 3104], 00:22:18.375 | 99.00th=[ 4799], 99.50th=[ 6074], 99.90th=[ 6141], 99.95th=[ 6141], 00:22:18.375 | 99.99th=[ 6141] 00:22:18.375 bw ( KiB/s): min= 2048, max=325632, per=4.72%, avg=157696.00, stdev=116421.19, samples=9 00:22:18.375 iops : min= 2, max= 318, avg=154.00, stdev=113.69, samples=9 00:22:18.375 lat (msec) : 500=38.05%, 750=22.56%, 1000=8.78%, 2000=11.46%, >=2000=19.15% 00:22:18.375 cpu : usr=0.03%, sys=1.22%, ctx=934, majf=0, minf=32769 00:22:18.375 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:22:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.375 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:18.375 issued rwts: total=820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:18.375 00:22:18.375 Run status group 0 (all jobs): 00:22:18.375 READ: bw=3262MiB/s (3420MB/s), 1357KiB/s-227MiB/s (1390kB/s-238MB/s), io=38.8GiB (41.7GB), run=10027-12179msec 00:22:18.375 00:22:18.375 Disk stats (read/write): 00:22:18.375 nvme0n1: ios=28860/0, merge=0/0, ticks=6686598/0, in_queue=6686598, util=98.58% 00:22:18.375 nvme1n1: ios=23579/0, merge=0/0, ticks=6559448/0, in_queue=6559448, util=98.93% 00:22:18.375 nvme2n1: ios=68580/0, merge=0/0, ticks=6226274/0, in_queue=6226274, util=99.01% 00:22:18.375 nvme3n1: ios=59612/0, merge=0/0, ticks=7142346/0, in_queue=7142346, util=98.77% 00:22:18.375 nvme4n1: ios=66838/0, merge=0/0, ticks=6673554/0, in_queue=6673554, util=98.81% 00:22:18.375 nvme5n1: ios=68890/0, merge=0/0, ticks=5514200/0, in_queue=5514200, util=99.30% 00:22:18.375 23:13:08 
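Editor's note: the per-job statistics and the "Run status group 0" summary above come from fio running read-only jobs at queue depth 128 (the "latency : ... depth=128" lines) against the six host block devices listed under "Disk stats" (nvme0n1 through nvme5n1). The harness's actual job file is not shown in this trace; the following is only an illustrative bash invocation that would produce output in this shape, with the block size inferred from the roughly 1 MiB BW-to-IOPS ratio seen above (e.g. 20 IOPS at 20.0 MiB/s) and the runtime inferred from the 10027-12179 msec run range:

    # Illustrative only -- not the job file the test used. Inferred knobs:
    # read-only workload, iodepth=128, ~1 MiB per IO, one job per namespace,
    # ~10 s time-based run. ioengine/direct are assumptions.
    devs=""
    for i in $(seq 0 5); do
        devs+=" --name=job$i --filename=/dev/nvme${i}n1"
    done
    fio --rw=read --bs=1M --iodepth=128 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 $devs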
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:22:18.375 23:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:22:18.375 23:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:18.375 23:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:22:18.375 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000000 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000000 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:18.375 23:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:18.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000001 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000001 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:18.375 23:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:19.305 NQN:nqn.2016-06.io.spdk:cnode2 
disconnected 1 controller(s) 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000002 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000002 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:19.305 23:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:20.235 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000003 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000003 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:20.235 23:13:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:21.167 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 
-- # grep -q -w SPDK00000000000004 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000004 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:21.167 23:13:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:22.098 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000005 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000005 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:22.098 rmmod nvme_rdma 00:22:22.098 rmmod nvme_fabrics 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:22:22.098 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- 
# return 0 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 986709 ']' 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 986709 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@949 -- # '[' -z 986709 ']' 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # kill -0 986709 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # uname 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:22.099 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 986709 00:22:22.356 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:22.356 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:22.356 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # echo 'killing process with pid 986709' 00:22:22.356 killing process with pid 986709 00:22:22.356 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # kill 986709 00:22:22.356 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # wait 986709 00:22:22.613 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.613 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:22.613 00:22:22.613 real 0m32.492s 00:22:22.613 user 1m53.547s 00:22:22.613 sys 0m14.423s 00:22:22.613 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:22.613 23:13:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:22.613 ************************************ 00:22:22.614 END TEST nvmf_srq_overwhelm 00:22:22.614 ************************************ 00:22:22.614 23:13:14 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:22.614 23:13:14 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:22.614 23:13:14 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:22.614 23:13:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:22.614 ************************************ 00:22:22.614 START TEST nvmf_shutdown 00:22:22.614 ************************************ 00:22:22.614 23:13:14 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:22.872 * Looking for test storage... 
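Editor's note: before the shutdown test starts, srq_overwhelm.sh@40-43 (traced above) tears the six subsystems down one at a time: disconnect the host-side controller, poll lsblk until the namespace with that subsystem's serial disappears, then delete the subsystem over the RPC socket; nvmftestfini afterwards unloads nvme-rdma/nvme-fabrics and kills the target (pid 986709). A stand-alone sketch of that loop, assuming scripts/rpc.py reaches the same target that the harness's rpc_cmd wrapper talks to:

    sync
    for i in $(seq 0 5); do
        nqn="nqn.2016-06.io.spdk:cnode$i"
        serial="SPDK0000000000000$i"   # serials as recorded above, e.g. SPDK00000000000000
        nvme disconnect -n "$nqn"
        # wait until no block device with this serial is left on the host
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
        ./scripts/rpc.py nvmf_delete_subsystem "$nqn"
    done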
00:22:22.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.872 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:22.873 ************************************ 00:22:22.873 START TEST nvmf_shutdown_tc1 00:22:22.873 ************************************ 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:22:22.873 23:13:14 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.873 23:13:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:29.431 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:29.431 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:29.431 Found net devices under 0000:da:00.0: mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:29.431 Found net devices under 0000:da:00.1: mlx_0_1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:29.431 23:13:21 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:29.431 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:29.431 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:29.431 altname enp218s0f0np0 00:22:29.431 altname ens818f0np0 00:22:29.431 inet 192.168.100.8/24 scope global mlx_0_0 00:22:29.431 valid_lft forever preferred_lft forever 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:29.431 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:29.431 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:29.431 altname enp218s0f1np1 00:22:29.431 altname ens818f1np1 00:22:29.431 inet 192.168.100.9/24 scope global mlx_0_1 00:22:29.431 valid_lft forever preferred_lft forever 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:29.431 23:13:21 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.431 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:29.432 192.168.100.9' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:29.432 192.168.100.9' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:29.432 192.168.100.9' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=994746 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 994746 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 994746 ']' 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:29.432 23:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.432 [2024-06-07 23:13:21.438349] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:22:29.432 [2024-06-07 23:13:21.438394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.432 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.432 [2024-06-07 23:13:21.496739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.432 [2024-06-07 23:13:21.573224] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.432 [2024-06-07 23:13:21.573266] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.432 [2024-06-07 23:13:21.573274] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.432 [2024-06-07 23:13:21.573280] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:29.432 [2024-06-07 23:13:21.573285] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.432 [2024-06-07 23:13:21.573395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.432 [2024-06-07 23:13:21.573498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.432 [2024-06-07 23:13:21.573603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.432 [2024-06-07 23:13:21.573604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:29.992 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:29.992 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:29.992 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.992 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:29.992 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.248 [2024-06-07 23:13:22.325266] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7ffcc0/0x8041b0) succeed. 00:22:30.248 [2024-06-07 23:13:22.334308] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x801300/0x845840) succeed. 
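The rpc_cmd nvmf_create_transport call traced just above (target/shutdown.sh@20) is what produces the two create_ib_device NOTICE lines: creating the RDMA transport makes the target open both mlx5 ports for NVMe/RDMA. A minimal sketch of the same step issued by hand against a running nvmf_tgt; the repo-relative script path and RPC socket path are assumptions, the flags are the ones shown in the trace:

    # create the RDMA transport on a running target (sketch, not the test's rpc_cmd wrapper)
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192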
00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.248 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.505 Malloc1 00:22:30.505 [2024-06-07 23:13:22.553236] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:30.505 Malloc2 00:22:30.505 Malloc3 00:22:30.505 Malloc4 
00:22:30.505 Malloc5 00:22:30.505 Malloc6 00:22:30.762 Malloc7 00:22:30.762 Malloc8 00:22:30.762 Malloc9 00:22:30.762 Malloc10 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=995053 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 995053 /var/tmp/bdevperf.sock 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 995053 ']' 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
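Inside the create_subsystems timing markers above, the loop at target/shutdown.sh@27/28 cats one block of RPCs per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@35 then replays the collected batch, which is why Malloc1 through Malloc10 and the 192.168.100.8:4420 RDMA listener notice all show up together in the output. A rough sketch of what one iteration (i=1) presumably appends; the RPC method names are real SPDK RPCs, but the sizes, serial number and exact arguments are illustrative (the authoritative heredoc lives in test/nvmf/target/shutdown.sh and is not echoed into this trace):

    # per-subsystem batch appended to rpcs.txt -- sizes, serial and NQNs illustrative
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420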
00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.762 { 00:22:30.762 "params": { 00:22:30.762 "name": "Nvme$subsystem", 00:22:30.762 "trtype": "$TEST_TRANSPORT", 00:22:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.762 "adrfam": "ipv4", 00:22:30.762 "trsvcid": "$NVMF_PORT", 00:22:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.762 "hdgst": ${hdgst:-false}, 00:22:30.762 "ddgst": ${ddgst:-false} 00:22:30.762 }, 00:22:30.762 "method": "bdev_nvme_attach_controller" 00:22:30.762 } 00:22:30.762 EOF 00:22:30.762 )") 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.762 { 00:22:30.762 "params": { 00:22:30.762 "name": "Nvme$subsystem", 00:22:30.762 "trtype": "$TEST_TRANSPORT", 00:22:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.762 "adrfam": "ipv4", 00:22:30.762 "trsvcid": "$NVMF_PORT", 00:22:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.762 "hdgst": ${hdgst:-false}, 00:22:30.762 "ddgst": ${ddgst:-false} 00:22:30.762 }, 00:22:30.762 "method": "bdev_nvme_attach_controller" 00:22:30.762 } 00:22:30.762 EOF 00:22:30.762 )") 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.762 { 00:22:30.762 "params": { 00:22:30.762 "name": "Nvme$subsystem", 00:22:30.762 "trtype": "$TEST_TRANSPORT", 00:22:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.762 "adrfam": "ipv4", 00:22:30.762 "trsvcid": "$NVMF_PORT", 00:22:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.762 "hdgst": ${hdgst:-false}, 00:22:30.762 "ddgst": ${ddgst:-false} 00:22:30.762 }, 00:22:30.762 "method": "bdev_nvme_attach_controller" 00:22:30.762 } 00:22:30.762 EOF 00:22:30.762 )") 00:22:30.762 23:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.762 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.762 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.762 { 00:22:30.762 "params": { 00:22:30.762 "name": "Nvme$subsystem", 00:22:30.762 "trtype": "$TEST_TRANSPORT", 00:22:30.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.762 "adrfam": "ipv4", 00:22:30.762 "trsvcid": 
"$NVMF_PORT", 00:22:30.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.762 "hdgst": ${hdgst:-false}, 00:22:30.762 "ddgst": ${ddgst:-false} 00:22:30.762 }, 00:22:30.762 "method": "bdev_nvme_attach_controller" 00:22:30.762 } 00:22:30.762 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.763 { 00:22:30.763 "params": { 00:22:30.763 "name": "Nvme$subsystem", 00:22:30.763 "trtype": "$TEST_TRANSPORT", 00:22:30.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.763 "adrfam": "ipv4", 00:22:30.763 "trsvcid": "$NVMF_PORT", 00:22:30.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.763 "hdgst": ${hdgst:-false}, 00:22:30.763 "ddgst": ${ddgst:-false} 00:22:30.763 }, 00:22:30.763 "method": "bdev_nvme_attach_controller" 00:22:30.763 } 00:22:30.763 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.763 { 00:22:30.763 "params": { 00:22:30.763 "name": "Nvme$subsystem", 00:22:30.763 "trtype": "$TEST_TRANSPORT", 00:22:30.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.763 "adrfam": "ipv4", 00:22:30.763 "trsvcid": "$NVMF_PORT", 00:22:30.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.763 "hdgst": ${hdgst:-false}, 00:22:30.763 "ddgst": ${ddgst:-false} 00:22:30.763 }, 00:22:30.763 "method": "bdev_nvme_attach_controller" 00:22:30.763 } 00:22:30.763 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.763 { 00:22:30.763 "params": { 00:22:30.763 "name": "Nvme$subsystem", 00:22:30.763 "trtype": "$TEST_TRANSPORT", 00:22:30.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.763 "adrfam": "ipv4", 00:22:30.763 "trsvcid": "$NVMF_PORT", 00:22:30.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.763 "hdgst": ${hdgst:-false}, 00:22:30.763 "ddgst": ${ddgst:-false} 00:22:30.763 }, 00:22:30.763 "method": "bdev_nvme_attach_controller" 00:22:30.763 } 00:22:30.763 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.763 [2024-06-07 23:13:23.024864] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:22:30.763 [2024-06-07 23:13:23.024916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.763 { 00:22:30.763 "params": { 00:22:30.763 "name": "Nvme$subsystem", 00:22:30.763 "trtype": "$TEST_TRANSPORT", 00:22:30.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.763 "adrfam": "ipv4", 00:22:30.763 "trsvcid": "$NVMF_PORT", 00:22:30.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.763 "hdgst": ${hdgst:-false}, 00:22:30.763 "ddgst": ${ddgst:-false} 00:22:30.763 }, 00:22:30.763 "method": "bdev_nvme_attach_controller" 00:22:30.763 } 00:22:30.763 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.763 { 00:22:30.763 "params": { 00:22:30.763 "name": "Nvme$subsystem", 00:22:30.763 "trtype": "$TEST_TRANSPORT", 00:22:30.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.763 "adrfam": "ipv4", 00:22:30.763 "trsvcid": "$NVMF_PORT", 00:22:30.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.763 "hdgst": ${hdgst:-false}, 00:22:30.763 "ddgst": ${ddgst:-false} 00:22:30.763 }, 00:22:30.763 "method": "bdev_nvme_attach_controller" 00:22:30.763 } 00:22:30.763 EOF 00:22:30.763 )") 00:22:30.763 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.018 { 00:22:31.018 "params": { 00:22:31.018 "name": "Nvme$subsystem", 00:22:31.018 "trtype": "$TEST_TRANSPORT", 00:22:31.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.018 "adrfam": "ipv4", 00:22:31.018 "trsvcid": "$NVMF_PORT", 00:22:31.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.018 "hdgst": ${hdgst:-false}, 00:22:31.018 "ddgst": ${ddgst:-false} 00:22:31.018 }, 00:22:31.018 "method": "bdev_nvme_attach_controller" 00:22:31.018 } 00:22:31.018 EOF 00:22:31.018 )") 00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
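gen_nvmf_target_json, traced above via nvmf/common.sh@532-556, builds one JSON fragment per controller (the repeated heredocs with the $subsystem placeholders), joins them with IFS=',' and pipes the result through jq; the consuming process then reads the config over an anonymous pipe, which is why the command lines show --json /dev/fd/63 here and /dev/fd/62 later. A stripped-down sketch of that pattern with illustrative names; the real helper wraps the fragments in a full SPDK "subsystems" config rather than a bare array:

    # collect per-controller fragments, comma-join them, validate with jq,
    # and hand the stream to the app through process substitution
    config=()
    for i in 1 2; do
      config+=("{ \"name\": \"Nvme$i\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\" }")
    done
    gen_json() { local IFS=,; printf '[%s]\n' "${config[*]}" | jq .; }
    some_app --json <(gen_json)    # appears as --json /dev/fd/NN in a trace like the one above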
00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:31.018 23:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:31.018 "params": { 00:22:31.018 "name": "Nvme1", 00:22:31.018 "trtype": "rdma", 00:22:31.018 "traddr": "192.168.100.8", 00:22:31.018 "adrfam": "ipv4", 00:22:31.018 "trsvcid": "4420", 00:22:31.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.018 "hdgst": false, 00:22:31.018 "ddgst": false 00:22:31.018 }, 00:22:31.018 "method": "bdev_nvme_attach_controller" 00:22:31.018 },{ 00:22:31.018 "params": { 00:22:31.018 "name": "Nvme2", 00:22:31.018 "trtype": "rdma", 00:22:31.018 "traddr": "192.168.100.8", 00:22:31.018 "adrfam": "ipv4", 00:22:31.018 "trsvcid": "4420", 00:22:31.018 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.018 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.018 "hdgst": false, 00:22:31.018 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme3", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme4", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme5", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme6", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme7", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme8", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.019 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme9", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 },{ 00:22:31.019 "params": { 00:22:31.019 "name": "Nvme10", 00:22:31.019 "trtype": "rdma", 00:22:31.019 "traddr": "192.168.100.8", 00:22:31.019 "adrfam": "ipv4", 00:22:31.019 "trsvcid": "4420", 00:22:31.019 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.019 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.019 "hdgst": false, 00:22:31.019 "ddgst": false 00:22:31.019 }, 00:22:31.019 "method": "bdev_nvme_attach_controller" 00:22:31.019 }' 00:22:31.019 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.019 [2024-06-07 23:13:23.090998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.019 [2024-06-07 23:13:23.165585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.946 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.947 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 995053 00:22:31.947 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:31.947 23:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:32.875 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 995053 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 994746 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.875 { 00:22:32.875 "params": { 
00:22:32.875 "name": "Nvme$subsystem", 00:22:32.875 "trtype": "$TEST_TRANSPORT", 00:22:32.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.875 "adrfam": "ipv4", 00:22:32.875 "trsvcid": "$NVMF_PORT", 00:22:32.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.875 "hdgst": ${hdgst:-false}, 00:22:32.875 "ddgst": ${ddgst:-false} 00:22:32.875 }, 00:22:32.875 "method": "bdev_nvme_attach_controller" 00:22:32.875 } 00:22:32.875 EOF 00:22:32.875 )") 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.875 { 00:22:32.875 "params": { 00:22:32.875 "name": "Nvme$subsystem", 00:22:32.875 "trtype": "$TEST_TRANSPORT", 00:22:32.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.875 "adrfam": "ipv4", 00:22:32.875 "trsvcid": "$NVMF_PORT", 00:22:32.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.875 "hdgst": ${hdgst:-false}, 00:22:32.875 "ddgst": ${ddgst:-false} 00:22:32.875 }, 00:22:32.875 "method": "bdev_nvme_attach_controller" 00:22:32.875 } 00:22:32.875 EOF 00:22:32.875 )") 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.875 { 00:22:32.875 "params": { 00:22:32.875 "name": "Nvme$subsystem", 00:22:32.875 "trtype": "$TEST_TRANSPORT", 00:22:32.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.875 "adrfam": "ipv4", 00:22:32.875 "trsvcid": "$NVMF_PORT", 00:22:32.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.875 "hdgst": ${hdgst:-false}, 00:22:32.875 "ddgst": ${ddgst:-false} 00:22:32.875 }, 00:22:32.875 "method": "bdev_nvme_attach_controller" 00:22:32.875 } 00:22:32.875 EOF 00:22:32.875 )") 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.875 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.875 { 00:22:32.875 "params": { 00:22:32.875 "name": "Nvme$subsystem", 00:22:32.875 "trtype": "$TEST_TRANSPORT", 00:22:32.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.875 "adrfam": "ipv4", 00:22:32.875 "trsvcid": "$NVMF_PORT", 00:22:32.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.875 "hdgst": ${hdgst:-false}, 00:22:32.875 "ddgst": ${ddgst:-false} 00:22:32.875 }, 00:22:32.875 "method": "bdev_nvme_attach_controller" 00:22:32.875 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 
"name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 [2024-06-07 23:13:25.067366] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:22:32.876 [2024-06-07 23:13:25.067421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995510 ] 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.876 { 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme$subsystem", 00:22:32.876 "trtype": "$TEST_TRANSPORT", 00:22:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "$NVMF_PORT", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.876 "hdgst": ${hdgst:-false}, 00:22:32.876 "ddgst": ${ddgst:-false} 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 } 00:22:32.876 EOF 00:22:32.876 )") 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
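After the bdev_svc initiator is killed with SIGKILL (shutdown.sh@83) and the target process is confirmed still alive (the kill -0 at shutdown.sh@88), shutdown.sh@91 runs bdevperf against the same ten controllers to show the target still serves I/O. Decoding the flags on that command line as bdevperf defines them: -q 64 is the queue depth, -o 65536 the I/O size in bytes (64 KiB), -w verify a read/write workload with data verification, and -t 1 the run time in seconds, which matches the "Running I/O for 1 seconds..." line and the per-Nvme*n1 result rows that follow. A sketch of the equivalent standalone invocation, with a repo-relative path assumed:

    # same workload as the traced run, pointed at the generated controller config
    ./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1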
00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:32.876 23:13:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme1", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme2", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme3", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme4", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme5", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme6", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme7", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.876 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme8", 00:22:32.876 "trtype": "rdma", 00:22:32.876 "traddr": "192.168.100.8", 00:22:32.876 "adrfam": "ipv4", 00:22:32.876 "trsvcid": "4420", 00:22:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.876 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:32.876 "hdgst": false, 00:22:32.876 "ddgst": false 00:22:32.876 }, 00:22:32.876 "method": "bdev_nvme_attach_controller" 00:22:32.876 },{ 00:22:32.876 "params": { 00:22:32.876 "name": "Nvme9", 00:22:32.876 "trtype": "rdma", 00:22:32.877 "traddr": "192.168.100.8", 00:22:32.877 "adrfam": "ipv4", 00:22:32.877 "trsvcid": "4420", 00:22:32.877 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.877 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:32.877 "hdgst": false, 00:22:32.877 "ddgst": false 00:22:32.877 }, 00:22:32.877 "method": "bdev_nvme_attach_controller" 00:22:32.877 },{ 00:22:32.877 "params": { 00:22:32.877 "name": "Nvme10", 00:22:32.877 "trtype": "rdma", 00:22:32.877 "traddr": "192.168.100.8", 00:22:32.877 "adrfam": "ipv4", 00:22:32.877 "trsvcid": "4420", 00:22:32.877 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.877 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.877 "hdgst": false, 00:22:32.877 "ddgst": false 00:22:32.877 }, 00:22:32.877 "method": "bdev_nvme_attach_controller" 00:22:32.877 }' 00:22:32.877 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.877 [2024-06-07 23:13:25.131925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.172 [2024-06-07 23:13:25.207000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.134 Running I/O for 1 seconds... 00:22:35.066 00:22:35.066 Latency(us) 00:22:35.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.066 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme1n1 : 1.17 350.17 21.89 0.00 0.00 175435.43 6553.60 232684.01 00:22:35.066 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme2n1 : 1.17 367.99 23.00 0.00 0.00 169632.26 7115.34 227690.79 00:22:35.066 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme3n1 : 1.18 381.24 23.83 0.00 0.00 161225.60 7458.62 154789.79 00:22:35.066 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme4n1 : 1.18 383.42 23.96 0.00 0.00 158085.58 5024.43 147799.28 00:22:35.066 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme5n1 : 1.18 380.41 23.78 0.00 0.00 157387.13 8238.81 136814.20 00:22:35.066 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme6n1 : 1.18 380.03 23.75 0.00 0.00 155037.43 8613.30 129823.70 00:22:35.066 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme7n1 : 1.18 379.65 23.73 0.00 0.00 152934.19 8862.96 122333.87 00:22:35.066 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme8n1 : 1.18 379.26 23.70 0.00 0.00 150929.73 9237.46 114344.72 00:22:35.066 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme9n1 : 1.18 378.81 23.68 0.00 0.00 149176.04 9736.78 103359.63 00:22:35.066 Job: Nvme10n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:35.066 Verification LBA range: start 0x0 length 0x400 00:22:35.066 Nvme10n1 : 1.19 377.39 23.59 0.00 0.00 147542.38 3510.86 101861.67 00:22:35.066 =================================================================================================================== 00:22:35.066 Total : 3758.38 234.90 0.00 0.00 157535.93 3510.86 232684.01 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:35.323 rmmod nvme_rdma 00:22:35.323 rmmod nvme_fabrics 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 994746 ']' 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 994746 00:22:35.323 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 994746 ']' 00:22:35.324 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 994746 00:22:35.324 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:22:35.324 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:35.324 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 994746 00:22:35.581 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:35.581 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:35.581 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 994746' 00:22:35.581 killing process with pid 994746 00:22:35.581 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@968 -- # kill 994746 00:22:35.581 23:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 994746 00:22:35.839 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.839 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:35.839 00:22:35.839 real 0m13.100s 00:22:35.839 user 0m30.681s 00:22:35.839 sys 0m5.755s 00:22:35.839 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:35.839 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.839 ************************************ 00:22:35.839 END TEST nvmf_shutdown_tc1 00:22:35.839 ************************************ 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.100 ************************************ 00:22:36.100 START TEST nvmf_shutdown_tc2 00:22:36.100 ************************************ 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:36.100 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:36.100 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:36.100 Found net devices under 0000:da:00.0: mlx_0_0 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
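nvmftestinit for tc2 repeats the NIC discovery tc1 used: for each matching mlx5 PCI function it lists the net devices the driver registered under sysfs (the two "Found net devices under 0000:da:00.x" lines above), and allocate_nic_ips / get_ip_address (traced below) then reads each interface's IPv4 address with the ip/awk/cut pipeline. Reduced to a runnable sketch using this run's PCI addresses and interface names:

    # map NIC PCI functions to kernel interface names via sysfs
    for pci in 0000:da:00.0 0000:da:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

    # read an interface's IPv4 address: field 4 of `ip -o -4 addr show` is ADDR/PREFIX
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    get_ip_address mlx_0_0   # 192.168.100.8 on this host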
00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:36.100 Found net devices under 0000:da:00.1: mlx_0_1 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:36.100 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.101 23:13:28 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:36.101 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.101 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:36.101 altname enp218s0f0np0 00:22:36.101 altname ens818f0np0 00:22:36.101 inet 192.168.100.8/24 scope global mlx_0_0 00:22:36.101 valid_lft forever preferred_lft forever 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:36.101 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.101 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:36.101 altname enp218s0f1np1 00:22:36.101 
altname ens818f1np1 00:22:36.101 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.101 valid_lft forever preferred_lft forever 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.101 23:13:28 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.101 192.168.100.9' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:36.101 192.168.100.9' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:36.101 192.168.100.9' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:36.101 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:36.359 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:36.359 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=996078 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 996078 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 996078 ']' 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:36.360 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.360 [2024-06-07 23:13:28.439378] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:22:36.360 [2024-06-07 23:13:28.439429] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.360 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.360 [2024-06-07 23:13:28.500902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.360 [2024-06-07 23:13:28.574063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.360 [2024-06-07 23:13:28.574108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.360 [2024-06-07 23:13:28.574115] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.360 [2024-06-07 23:13:28.574120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.360 [2024-06-07 23:13:28.574125] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.360 [2024-06-07 23:13:28.574244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.360 [2024-06-07 23:13:28.574321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.360 [2024-06-07 23:13:28.574408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.360 [2024-06-07 23:13:28.574409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.617 [2024-06-07 23:13:28.742769] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2110cc0/0x21151b0) succeed. 00:22:36.617 [2024-06-07 23:13:28.751794] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2112300/0x2156840) succeed. 
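With the interface IPs harvested above (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1), the harness starts nvmf_tgt and creates the RDMA transport over its RPC socket. Roughly the same steps can be reproduced with scripts/rpc.py; a minimal sketch, assuming an SPDK checkout as the working directory and the same flags that appear in this log:

# Sketch: pull the first RDMA interface IP, start the target, create the rdma transport.
NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init        # block until the app is ready
./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
echo "transport ready; target address will be $NVMF_FIRST_TARGET_IP"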
00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.617 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.875 23:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.875 Malloc1 00:22:36.875 [2024-06-07 23:13:28.959631] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:36.875 Malloc2 00:22:36.875 Malloc3 00:22:36.875 Malloc4 
00:22:36.875 Malloc5 00:22:37.132 Malloc6 00:22:37.133 Malloc7 00:22:37.133 Malloc8 00:22:37.133 Malloc9 00:22:37.133 Malloc10 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=996352 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 996352 /var/tmp/bdevperf.sock 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 996352 ']' 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.133 { 00:22:37.133 "params": { 00:22:37.133 "name": "Nvme$subsystem", 00:22:37.133 "trtype": "$TEST_TRANSPORT", 00:22:37.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.133 "adrfam": "ipv4", 00:22:37.133 "trsvcid": "$NVMF_PORT", 00:22:37.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.133 "hdgst": ${hdgst:-false}, 00:22:37.133 "ddgst": ${ddgst:-false} 00:22:37.133 }, 00:22:37.133 "method": "bdev_nvme_attach_controller" 00:22:37.133 } 00:22:37.133 EOF 00:22:37.133 )") 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
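Stepping back to the subsystem setup: each 'cat' in the shutdown.sh loop above appends one subsystem's batch of RPC lines to rpcs.txt, and the single rpc_cmd call replays the whole file, which is what produces the Malloc1..Malloc10 bdevs and the RDMA listener on 192.168.100.8:4420. A rough sketch of what one iteration contributes; the bdev size and the exact flags used by shutdown.sh are assumptions here:

# Sketch: per-subsystem RPC batch appended to rpcs.txt (sizes and flags are illustrative).
for i in {1..10}; do
    cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # harness wrapper around scripts/rpc.py; replays the batch in one call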
00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.133 { 00:22:37.133 "params": { 00:22:37.133 "name": "Nvme$subsystem", 00:22:37.133 "trtype": "$TEST_TRANSPORT", 00:22:37.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.133 "adrfam": "ipv4", 00:22:37.133 "trsvcid": "$NVMF_PORT", 00:22:37.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.133 "hdgst": ${hdgst:-false}, 00:22:37.133 "ddgst": ${ddgst:-false} 00:22:37.133 }, 00:22:37.133 "method": "bdev_nvme_attach_controller" 00:22:37.133 } 00:22:37.133 EOF 00:22:37.133 )") 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.133 { 00:22:37.133 "params": { 00:22:37.133 "name": "Nvme$subsystem", 00:22:37.133 "trtype": "$TEST_TRANSPORT", 00:22:37.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.133 "adrfam": "ipv4", 00:22:37.133 "trsvcid": "$NVMF_PORT", 00:22:37.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.133 "hdgst": ${hdgst:-false}, 00:22:37.133 "ddgst": ${ddgst:-false} 00:22:37.133 }, 00:22:37.133 "method": "bdev_nvme_attach_controller" 00:22:37.133 } 00:22:37.133 EOF 00:22:37.133 )") 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.133 { 00:22:37.133 "params": { 00:22:37.133 "name": "Nvme$subsystem", 00:22:37.133 "trtype": "$TEST_TRANSPORT", 00:22:37.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.133 "adrfam": "ipv4", 00:22:37.133 "trsvcid": "$NVMF_PORT", 00:22:37.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.133 "hdgst": ${hdgst:-false}, 00:22:37.133 "ddgst": ${ddgst:-false} 00:22:37.133 }, 00:22:37.133 "method": "bdev_nvme_attach_controller" 00:22:37.133 } 00:22:37.133 EOF 00:22:37.133 )") 00:22:37.133 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 [2024-06-07 23:13:29.427589] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:22:37.391 [2024-06-07 23:13:29.427646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996352 ] 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.391 { 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme$subsystem", 00:22:37.391 "trtype": "$TEST_TRANSPORT", 00:22:37.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "$NVMF_PORT", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.391 "hdgst": ${hdgst:-false}, 00:22:37.391 "ddgst": ${ddgst:-false} 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 } 00:22:37.391 EOF 00:22:37.391 )") 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
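The config+= here-docs above each hold one bdev_nvme_attach_controller entry with its shell variables still unexpanded; gen_nvmf_target_json joins them with commas (the IFS=, and printf step with the expanded result follows just below) and wraps them in a bdev-subsystem JSON document that jq pretty-prints for bdevperf. The wrapper shape shown here is an assumption about that helper, with a single controller and an illustrative file path; the harness itself streams the JSON through /dev/fd/63 using the same -q/-o/-w/-t values logged earlier:

# Sketch: assumed overall shape of the generated config, saved to a file and fed to bdevperf.
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
    -q 64 -o 65536 -w verify -t 10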
00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:37.391 23:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme1", 00:22:37.391 "trtype": "rdma", 00:22:37.391 "traddr": "192.168.100.8", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme2", 00:22:37.391 "trtype": "rdma", 00:22:37.391 "traddr": "192.168.100.8", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme3", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme4", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme5", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme6", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme7", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme8", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:37.392 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme9", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 },{ 00:22:37.392 "params": { 00:22:37.392 "name": "Nvme10", 00:22:37.392 "trtype": "rdma", 00:22:37.392 "traddr": "192.168.100.8", 00:22:37.392 "adrfam": "ipv4", 00:22:37.392 "trsvcid": "4420", 00:22:37.392 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:37.392 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:37.392 "hdgst": false, 00:22:37.392 "ddgst": false 00:22:37.392 }, 00:22:37.392 "method": "bdev_nvme_attach_controller" 00:22:37.392 }' 00:22:37.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.392 [2024-06-07 23:13:29.490864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.392 [2024-06-07 23:13:29.564393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.323 Running I/O for 10 seconds... 00:22:38.323 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:38.323 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:22:38.323 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:38.323 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.323 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.580 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.581 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.581 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.581 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.581 23:13:30 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=35 00:22:38.581 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 35 -ge 100 ']' 00:22:38.581 23:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.838 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 996352 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 996352 ']' 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 996352 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 996352 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 996352' 00:22:39.095 killing process with pid 996352 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 996352 00:22:39.095 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 996352 00:22:39.095 Received shutdown signal, test time was about 0.850750 seconds 00:22:39.095 00:22:39.095 Latency(us) 00:22:39.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.095 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.095 Verification LBA range: start 0x0 length 0x400 00:22:39.095 Nvme1n1 : 0.83 383.96 24.00 0.00 0.00 163111.08 2902.31 177758.60 00:22:39.096 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme2n1 : 0.83 383.30 23.96 0.00 0.00 160283.50 7427.41 161780.30 00:22:39.096 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme3n1 : 0.84 382.75 23.92 0.00 0.00 157081.55 7801.90 154789.79 00:22:39.096 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme4n1 : 0.84 382.11 23.88 0.00 0.00 154657.94 8363.64 143804.71 00:22:39.096 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme5n1 : 0.84 381.34 23.83 0.00 0.00 152368.08 9175.04 129823.70 00:22:39.096 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme6n1 : 0.84 380.57 23.79 0.00 0.00 149610.69 10173.68 114344.72 00:22:39.096 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme7n1 : 0.84 379.80 23.74 0.00 0.00 146837.31 11109.91 99365.06 00:22:39.096 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme8n1 : 0.84 379.05 23.69 0.00 0.00 144081.09 12108.56 107354.21 00:22:39.096 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme9n1 : 0.85 378.29 23.64 0.00 0.00 141318.83 13044.78 122333.87 00:22:39.096 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:39.096 Verification LBA range: start 0x0 length 0x400 00:22:39.096 Nvme10n1 : 0.85 301.14 18.82 0.00 0.00 173360.46 3011.54 205720.62 00:22:39.096 =================================================================================================================== 00:22:39.096 Total : 3732.31 233.27 0.00 0.00 153881.47 2902.31 205720.62 00:22:39.353 23:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@120 -- # set +e 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:40.722 rmmod nvme_rdma 00:22:40.722 rmmod nvme_fabrics 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 996078 ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 996078 ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 996078' 00:22:40.722 killing process with pid 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 996078 00:22:40.722 23:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 996078 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:40.979 00:22:40.979 real 0m4.958s 00:22:40.979 user 0m20.014s 00:22:40.979 sys 0m0.998s 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.979 ************************************ 00:22:40.979 END TEST nvmf_shutdown_tc2 00:22:40.979 ************************************ 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.979 ************************************ 00:22:40.979 START TEST nvmf_shutdown_tc3 00:22:40.979 ************************************ 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@121 -- # starttarget 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.979 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:40.980 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:40.980 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:40.980 
23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:40.980 Found net devices under 0000:da:00.0: mlx_0_0 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:40.980 Found net devices under 0000:da:00.1: mlx_0_1 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@63 -- # modprobe ib_core 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:40.980 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:41.236 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.237 23:13:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:41.237 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.237 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:41.237 altname enp218s0f0np0 00:22:41.237 altname ens818f0np0 00:22:41.237 inet 192.168.100.8/24 scope global mlx_0_0 00:22:41.237 valid_lft forever preferred_lft forever 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:41.237 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.237 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:41.237 altname enp218s0f1np1 00:22:41.237 altname ens818f1np1 00:22:41.237 inet 192.168.100.9/24 scope global mlx_0_1 00:22:41.237 valid_lft forever preferred_lft forever 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:41.237 192.168.100.9' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:41.237 192.168.100.9' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:41.237 192.168.100.9' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=997154 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 997154 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 997154 ']' 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:41.237 23:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:41.237 [2024-06-07 23:13:33.440277] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:22:41.237 [2024-06-07 23:13:33.440322] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.237 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.237 [2024-06-07 23:13:33.499333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.493 [2024-06-07 23:13:33.572847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.493 [2024-06-07 23:13:33.572885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.493 [2024-06-07 23:13:33.572892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.493 [2024-06-07 23:13:33.572898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:41.493 [2024-06-07 23:13:33.572902] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.493 [2024-06-07 23:13:33.573021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.493 [2024-06-07 23:13:33.573133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.493 [2024-06-07 23:13:33.573241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.493 [2024-06-07 23:13:33.573242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.055 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.055 [2024-06-07 23:13:34.296863] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f39cc0/0x1f3e1b0) succeed. 00:22:42.055 [2024-06-07 23:13:34.305985] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f3b300/0x1f7f840) succeed. 
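The nvmfappstart sequence above launches nvmf_tgt with core mask 0x1E, waits on its RPC socket, and creates the RDMA transport (the rdma.c notices confirm both mlx5 IB devices were registered). A hand-run equivalent might look like the sketch below, assuming an SPDK checkout at $SPDK_DIR; the transport flags are copied from the trace, the surrounding shell is an assumption:

# start the target in the background with the same core mask / trace flags as the trace
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# block until the RPC server answers, then create the RDMA transport
$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192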
00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.312 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.312 Malloc1 00:22:42.312 [2024-06-07 23:13:34.515523] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:42.312 Malloc2 00:22:42.312 Malloc3 00:22:42.568 Malloc4 
00:22:42.568 Malloc5 00:22:42.568 Malloc6 00:22:42.568 Malloc7 00:22:42.568 Malloc8 00:22:42.568 Malloc9 00:22:42.825 Malloc10 00:22:42.825 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.825 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:42.825 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=997441 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 997441 /var/tmp/bdevperf.sock 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 997441 ']' 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
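Each MallocN bdev above backs one of the ten subsystems assembled in rpcs.txt and consumed by the bdevperf config that follows (nqn.2016-06.io.spdk:cnode1..10, all listening on 192.168.100.8:4420). The batch itself is not echoed in the trace, so the per-subsystem sketch below is a plausible reconstruction rather than a transcript of shutdown.sh; the RPC names exist in scripts/rpc.py, but the bdev sizes and serial numbers are assumptions:

for i in $(seq 1 10); do
  # one RAM-backed bdev per subsystem, matching the MallocN names above (128 MiB, 512 B blocks assumed)
  rpc.py bdev_malloc_create -b "Malloc$i" 128 512
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done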
00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": 
"$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 [2024-06-07 23:13:34.980139] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:22:42.826 [2024-06-07 23:13:34.980188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997441 ] 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.826 "name": "Nvme$subsystem", 00:22:42.826 "trtype": "$TEST_TRANSPORT", 00:22:42.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.826 "adrfam": "ipv4", 00:22:42.826 "trsvcid": "$NVMF_PORT", 00:22:42.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.826 "hdgst": ${hdgst:-false}, 00:22:42.826 "ddgst": ${ddgst:-false} 00:22:42.826 }, 00:22:42.826 "method": "bdev_nvme_attach_controller" 00:22:42.826 } 00:22:42.826 EOF 00:22:42.826 )") 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:42.826 23:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:42.826 { 00:22:42.826 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": 
${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:42.827 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.827 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:42.827 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:42.827 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme1", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme2", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme3", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme4", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme5", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme6", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme7", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme8", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme9", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 },{ 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme10", 00:22:42.827 "trtype": "rdma", 00:22:42.827 "traddr": "192.168.100.8", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "4420", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:42.827 "hdgst": false, 00:22:42.827 "ddgst": false 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 }' 00:22:42.827 [2024-06-07 23:13:35.042818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.083 [2024-06-07 23:13:35.116852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.012 Running I/O for 10 seconds... 00:22:44.012 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:44.012 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:22:44.012 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:44.012 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.012 23:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:44.012 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.268 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=155 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 997154 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 997154 ']' 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 997154 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 997154 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 997154' 00:22:44.525 killing process with pid 997154 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 997154 00:22:44.525 23:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@973 -- # wait 997154 00:22:45.089 23:13:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:45.089 23:13:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:45.668 [2024-06-07 23:13:37.699479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.668 [2024-06-07 23:13:37.699523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.699536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.668 [2024-06-07 23:13:37.699542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.699565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.668 [2024-06-07 23:13:37.699572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.699578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.668 [2024-06-07 23:13:37.699585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.702007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.668 [2024-06-07 23:13:37.702053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.668 [2024-06-07 23:13:37.709509] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.719534] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.729562] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.739598] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.749624] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.759655] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.769694] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.779730] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.789755] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.799786] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
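The waitforio helper traced above keeps polling bdev_get_iostat until Nvme1n1 has completed at least 100 reads (3 on the first pass, 155 on the second), and only then kills the target so the shutdown happens under active I/O; the CQ transport errors and failover notices that follow are the expected fallout. A condensed sketch of that polling loop, assuming the same bdevperf RPC socket; the retry bound and sleep interval are illustrative:

sock=/var/tmp/bdevperf.sock
for i in $(seq 10 -1 1); do
  # num_read_ops of the first bdev reported by bdevperf, using the same jq filter as the trace
  reads=$(rpc.py -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
  [ "$reads" -ge 100 ] && break
  sleep 0.25
done
kill "$nvmfpid"   # $nvmfpid: pid of the nvmf target started earlier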
00:22:45.668 [2024-06-07 23:13:37.809827] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.668 [2024-06-07 23:13:37.819090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x183800 00:22:45.668 [2024-06-07 23:13:37.819273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182e00 00:22:45.668 [2024-06-07 23:13:37.819292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.668 [2024-06-07 23:13:37.819302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 
23:13:37.819421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182e00 00:22:45.669 [2024-06-07 23:13:37.819530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182500 00:22:45.669 [2024-06-07 23:13:37.819547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182500 00:22:45.669 [2024-06-07 23:13:37.819563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182500 00:22:45.669 [2024-06-07 23:13:37.819579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182500 00:22:45.669 [2024-06-07 23:13:37.819596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182500 00:22:45.669 [2024-06-07 23:13:37.819612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003eef0c0 len:0x10000 key:0x182600 00:22:45.669 [2024-06-07 23:13:37.819629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003edf040 len:0x10000 key:0x182600 00:22:45.669 [2024-06-07 23:13:37.819645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001316d000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001314c000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x182800 
00:22:45.669 [2024-06-07 23:13:37.819734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130e9000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130c8000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001086f000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001084e000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001082d000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001080c000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107eb000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.669 [2024-06-07 23:13:37.819938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e8b9000 len:0x10000 key:0x182800 00:22:45.669 [2024-06-07 23:13:37.819945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.819957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e898000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.819965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.819975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e877000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.819982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.819992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.819999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e835000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e814000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7f3000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7d2000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7b1000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e790000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000119b5000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011994000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.820212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.820220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b28fa80 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b24f880 
len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b23f800 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b22f780 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b21f700 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x182800 00:22:45.670 [2024-06-07 23:13:37.823231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e5ec40 len:0x10000 key:0x182600 00:22:45.670 [2024-06-07 23:13:37.823249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e4ebc0 len:0x10000 key:0x182600 00:22:45.670 [2024-06-07 23:13:37.823263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3eb40 len:0x10000 key:0x182600 00:22:45.670 [2024-06-07 23:13:37.823277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x182600 00:22:45.670 [2024-06-07 23:13:37.823292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x182600 00:22:45.670 
[2024-06-07 23:13:37.823307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x182600 00:22:45.670 [2024-06-07 23:13:37.823321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183900 00:22:45.670 [2024-06-07 23:13:37.823335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.670 [2024-06-07 23:13:37.823344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183900 00:22:45.670 [2024-06-07 23:13:37.823351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823442] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002171c0 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183900 00:22:45.671 [2024-06-07 23:13:37.823512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000715fb00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000711f900 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.671 [2024-06-07 23:13:37.823903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000705f300 len:0x10000 key:0x182700 00:22:45.671 [2024-06-07 23:13:37.823912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.823921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x182700 00:22:45.672 [2024-06-07 23:13:37.823928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.823936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x182700 00:22:45.672 [2024-06-07 23:13:37.823942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.823950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x182700 00:22:45.672 [2024-06-07 23:13:37.823957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.823965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x182700 00:22:45.672 [2024-06-07 23:13:37.823971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.823978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x182800 00:22:45.672 [2024-06-07 23:13:37.823986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a7924b50 
sqhd:0000 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826240] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:22:45.672 [2024-06-07 23:13:37.826261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058fb00 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 
23:13:37.826418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f700 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004cf500 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x182300 00:22:45.672 [2024-06-07 23:13:37.826614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.672 [2024-06-07 23:13:37.826624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049f380 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x182300 00:22:45.673 [2024-06-07 23:13:37.826788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195dff80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001959fd80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001951f980 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.827018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.827033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 
00:22:45.673 [2024-06-07 23:13:37.827041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ff880 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.827047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x183000 00:22:45.673 [2024-06-07 23:13:37.827061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x182700 00:22:45.673 [2024-06-07 23:13:37.827077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.827091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.827106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.827114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.827121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011763000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011784000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117a5000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117c6000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117e7000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011808000 len:0x10000 key:0x182800 00:22:45.673 [2024-06-07 23:13:37.834304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.673 [2024-06-07 23:13:37.834313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011829000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001184a000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001186b000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001188c000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ad000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.834410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x182800 00:22:45.674 [2024-06-07 23:13:37.834417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:41af440 sqhd:4520 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836212] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:22:45.674 [2024-06-07 23:13:37.836229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 
sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 
[2024-06-07 23:13:37.836503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x183b00 00:22:45.674 [2024-06-07 23:13:37.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x181e00 00:22:45.674 [2024-06-07 23:13:37.836589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x181e00 00:22:45.674 [2024-06-07 23:13:37.836605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x181e00 00:22:45.674 [2024-06-07 23:13:37.836620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x181e00 00:22:45.674 [2024-06-07 23:13:37.836637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x183e00 00:22:45.674 [2024-06-07 23:13:37.836653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x183e00 00:22:45.674 [2024-06-07 23:13:37.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x183e00 00:22:45.674 [2024-06-07 23:13:37.836684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.674 [2024-06-07 23:13:37.836692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x183e00 00:22:45.674 [2024-06-07 23:13:37.836699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x183e00 00:22:45.675 [2024-06-07 23:13:37.836715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x183e00 00:22:45.675 [2024-06-07 23:13:37.836731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x183e00 00:22:45.675 [2024-06-07 23:13:37.836748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x183e00 00:22:45.675 [2024-06-07 23:13:37.836763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f411000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3f0000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.836990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d24b000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.836997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d22a000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d0f000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cee000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ccd000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x182800 
00:22:45.675 [2024-06-07 23:13:37.837085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c49000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c28000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c07000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be6000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc5000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba4000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.837275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x182800 00:22:45.675 [2024-06-07 23:13:37.837283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:5ad0 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.839562] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:22:45.675 [2024-06-07 23:13:37.839586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182a00 00:22:45.675 [2024-06-07 23:13:37.839597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.675 [2024-06-07 23:13:37.839613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182a00 00:22:45.675 [2024-06-07 23:13:37.839627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf600 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200019c8f500 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.839976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.839990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4f300 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182a00 00:22:45.676 [2024-06-07 23:13:37.840170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 
key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fd00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 
23:13:37.840397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f4fb00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.676 [2024-06-07 23:13:37.840480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182b00 00:22:45.676 [2024-06-07 23:13:37.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff880 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ecf700 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf600 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182b00 00:22:45.677 [2024-06-07 23:13:37.840671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019aafc00 len:0x10000 key:0x181e00 00:22:45.677 [2024-06-07 23:13:37.840694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d30000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d51000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e17000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e38000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e59000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.840983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.840993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.841006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011edd000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.841021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 
p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.841033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011efe000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.841043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.841056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f1f000 len:0x10000 key:0x182800 00:22:45.677 [2024-06-07 23:13:37.841067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e780 sqhd:0a80 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843371] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:22:45.677 [2024-06-07 23:13:37.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.677 [2024-06-07 23:13:37.843517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x182f00 00:22:45.677 [2024-06-07 23:13:37.843527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x182f00 00:22:45.678 [2024-06-07 23:13:37.843918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.843940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.843962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.843985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.843998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x182d00 00:22:45.678 [2024-06-07 23:13:37.844129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x182c00 00:22:45.678 [2024-06-07 23:13:37.844152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dacd000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daac000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da8b000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da49000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x182800 00:22:45.678 [2024-06-07 23:13:37.844357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.678 [2024-06-07 23:13:37.844370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9e6000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 
[2024-06-07 23:13:37.844393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9c5000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9a4000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d983000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d962000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d941000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d920000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108d2000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108f3000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010914000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010935000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010956000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010977000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010998000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebf2000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec13000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec34000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.844856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109b9000 len:0x10000 key:0x182800 00:22:45.679 [2024-06-07 23:13:37.844865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:ec70 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847334] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:22:45.679 [2024-06-07 23:13:37.847358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183400 00:22:45.679 [2024-06-07 23:13:37.847369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183400 00:22:45.679 [2024-06-07 23:13:37.847395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183400 00:22:45.679 [2024-06-07 23:13:37.847418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183400 00:22:45.679 [2024-06-07 23:13:37.847441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 
sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.679 [2024-06-07 23:13:37.847683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183600 00:22:45.679 [2024-06-07 23:13:37.847694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 
[2024-06-07 23:13:37.847706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.847981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.847991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.848018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.848041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183600 00:22:45.680 [2024-06-07 23:13:37.848067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x182d00 00:22:45.680 [2024-06-07 23:13:37.848089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d97000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010db8000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dd9000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dfa000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e1b000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e3c000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e5d000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e7e000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e9f000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000f8b5000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f894000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f873000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f852000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f831000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f810000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x182800 00:22:45.680 [2024-06-07 23:13:37.848516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.680 [2024-06-07 23:13:37.848528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x182800 
00:22:45.680 [2024-06-07 23:13:37.848538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f11a000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f9000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0b7000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f096000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f075000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f054000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f033000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f012000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eff1000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.848805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000efd0000 len:0x10000 key:0x182800 00:22:45.681 [2024-06-07 23:13:37.848814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:1530 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850676] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:22:45.681 [2024-06-07 23:13:37.850700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.850981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.850991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001ac8f500 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.681 [2024-06-07 23:13:37.851155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183100 00:22:45.681 [2024-06-07 23:13:37.851165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183100 00:22:45.682 [2024-06-07 23:13:37.851188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183100 00:22:45.682 [2024-06-07 23:13:37.851211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183100 00:22:45.682 [2024-06-07 23:13:37.851234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183100 00:22:45.682 [2024-06-07 23:13:37.851259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 
key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 
23:13:37.851489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183300 00:22:45.682 [2024-06-07 23:13:37.851582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183200 00:22:45.682 [2024-06-07 23:13:37.851605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3d7000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3b6000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010746000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010767000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130a7000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013086000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd7a000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9b000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdbc000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fddd000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110af000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.851984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001212f000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.851993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.682 [2024-06-07 23:13:37.852006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001210e000 len:0x10000 key:0x182800 00:22:45.682 [2024-06-07 23:13:37.852022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ed000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 
dnr:0 00:22:45.683 [2024-06-07 23:13:37.852130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012069000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012048000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012027000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.852199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012006000 len:0x10000 key:0x182800 00:22:45.683 [2024-06-07 23:13:37.852209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:3df0 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.854705] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:22:45.683 [2024-06-07 23:13:37.854753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.854775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.854806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.854827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.854853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.854875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.854908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.854929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.854956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.854976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183a00 00:22:45.683 [2024-06-07 23:13:37.855462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183700 00:22:45.683 [2024-06-07 23:13:37.855771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.683 [2024-06-07 23:13:37.855783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.855987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.855997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 
00:22:45.684 [2024-06-07 23:13:37.856082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183700 00:22:45.684 [2024-06-07 23:13:37.856183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183500 00:22:45.684 [2024-06-07 23:13:37.856205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183300 00:22:45.684 [2024-06-07 23:13:37.856228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012bc1000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c66000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c87000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ca8000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cc9000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d6e000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.856589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d8f000 len:0x10000 key:0x182800 00:22:45.684 [2024-06-07 23:13:37.856599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e6c0 sqhd:6010 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.858355] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
00:22:45.684 [2024-06-07 23:13:37.858381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183500 00:22:45.684 [2024-06-07 23:13:37.858391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.684 [2024-06-07 23:13:37.858407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183500 00:22:45.685 [2024-06-07 23:13:37.858672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858801] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.858983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.858993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b67f480 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183f00 00:22:45.685 [2024-06-07 23:13:37.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.685 [2024-06-07 23:13:37.859268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183f00 00:22:45.686 [2024-06-07 23:13:37.859392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 
key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 
23:13:37.859642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184000 00:22:45.686 [2024-06-07 23:13:37.859822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.859835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183500 00:22:45.686 [2024-06-07 23:13:37.859845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40ef440 sqhd:85f0 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.862611] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:22:45.686 [2024-06-07 23:13:37.862641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.686 [2024-06-07 23:13:37.862713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.686 [2024-06-07 23:13:37.862728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.862739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.686 [2024-06-07 23:13:37.862750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.862761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.686 [2024-06-07 23:13:37.862771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.862782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.686 [2024-06-07 23:13:37.862792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.686 [2024-06-07 23:13:37.864853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.686 [2024-06-07 23:13:37.864869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:45.686 [2024-06-07 23:13:37.864879] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
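Each *NOTICE* pair above is one in-flight WRITE from bdevperf (and, just after, an admin-queue ASYNC EVENT REQUEST) being completed with the NVMe status "Command Aborted due to SQ Deletion", status code type 0x0 / status code 0x08, which SPDK prints as (00/08), once the target tears the submission queues down. If this console output has been saved to a file, the aborts can be tallied per queue with a short filter; the file name console.log is only an assumption here:

  # count completions aborted by SQ deletion, grouped by queue id
  # (qid:0 = admin queue, qid:1 = the bdevperf I/O queue)
  grep -o 'ABORTED - SQ DELETION ([0-9a-f/]*) qid:[0-9]*' console.log \
      | awk '{print $NF}' | sort | uniq -c | sort -rn

The count for qid:1 roughly matches the number of writes the verify workload still had outstanding when the queue disappeared.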
00:22:45.687 [2024-06-07 23:13:37.864896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.864907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.864918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.864928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.864938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.864948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.864959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.864969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.866884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.866904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:45.687 [2024-06-07 23:13:37.866913] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.866930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.866940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.866951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.866961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.866972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.866981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.866992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.867001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.869050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.869065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:45.687 [2024-06-07 23:13:37.869074] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.869092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.869113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.869124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.869134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.869144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.869154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.869164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.871042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.871057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:45.687 [2024-06-07 23:13:37.871066] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.871082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.871092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.871103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.871116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.871127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.871136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.871147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.873037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.873052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:45.687 [2024-06-07 23:13:37.873061] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.873078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.873088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.873099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.873108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.873118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.873128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.873138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.873148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.875097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.875111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:45.687 [2024-06-07 23:13:37.875120] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.875136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.875146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.875156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.875166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.875182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.875202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.875215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.877018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.877032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:45.687 [2024-06-07 23:13:37.877041] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.877057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.877068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.877078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.877087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.877098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.877108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.877118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.877128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.878697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.687 [2024-06-07 23:13:37.878711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:45.687 [2024-06-07 23:13:37.878720] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:45.687 [2024-06-07 23:13:37.878736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.878746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.687 [2024-06-07 23:13:37.878756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.687 [2024-06-07 23:13:37.878766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.688 [2024-06-07 23:13:37.878785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.688 [2024-06-07 23:13:37.878795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.688 [2024-06-07 23:13:37.878805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.688 [2024-06-07 23:13:37.878814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:57583 cdw0:0 sqhd:5000 p:1 m:1 dnr:0 00:22:45.688 [2024-06-07 23:13:37.898071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:45.688 [2024-06-07 23:13:37.898087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:45.688 [2024-06-07 23:13:37.898094] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.905948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:45.688 [2024-06-07 23:13:37.905970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:45.688 [2024-06-07 23:13:37.905980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:45.688 [2024-06-07 23:13:37.906043] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.906058] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.906068] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.906077] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.906086] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:45.688 [2024-06-07 23:13:37.906094] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
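Once the I/O queues are gone, bdev_nvme walks all ten controllers (cnode1 through cnode10) and tries to reset them: the admin-queue requests come back aborted, the completion queue reports transport error -6 (no such device or address), and overlapping reset attempts are refused with "Unable to perform failover, already in progress". A minimal way to provoke the same storm outside the harness is to stand up a small RDMA target and pull it out from under an initiator mid-write. The sketch below is only that, a sketch: it assumes an SPDK checkout with a finished build, the 192.168.100.8 address this job uses already configured on an RDMA-capable NIC, and it swaps the kernel initiator in for bdevperf purely to stay short (the test itself drives I/O with build/examples/bdevperf over SPDK's own initiator).

  cd /path/to/spdk                          # assumed SPDK checkout
  build/bin/nvmf_tgt -m 0x3 &               # start the target app
  sleep 2
  scripts/rpc.py nvmf_create_transport -t RDMA
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK0001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  modprobe nvme-rdma
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # device node below is an assumption; check `nvme list` after connecting
  dd if=/dev/zero of=/dev/nvme0n1 bs=64k count=100000 oflag=direct &
  kill -9 $(pgrep -f nvmf_tgt)              # drop the target mid-I/O: aborted SQs, failed resets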
00:22:45.688 [2024-06-07 23:13:37.906389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:45.688 [2024-06-07 23:13:37.906399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:45.688 [2024-06-07 23:13:37.906407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:45.688 task offset: 35840 on job bdev=Nvme1n1 fails 00:22:45.688 00:22:45.688 Latency(us) 00:22:45.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.688 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme1n1 ended in about 1.84 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme1n1 : 1.84 134.97 8.44 34.83 0.00 374634.06 8363.64 1126470.22 00:22:45.688 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme2n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme2n1 : 1.92 133.66 8.35 33.41 0.00 377451.47 54176.43 1190383.42 00:22:45.688 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme3n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme3n1 : 1.92 133.60 8.35 33.40 0.00 374445.01 11421.99 1182394.27 00:22:45.688 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme4n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme4n1 : 1.92 137.18 8.57 33.38 0.00 363628.65 4649.94 1166415.97 00:22:45.688 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme5n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme5n1 : 1.92 133.47 8.34 33.37 0.00 368895.51 22968.81 1158426.82 00:22:45.688 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme6n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme6n1 : 1.92 133.41 8.34 33.35 0.00 366010.37 25340.59 1150437.67 00:22:45.688 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme7n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme7n1 : 1.92 133.35 8.33 33.34 0.00 363019.90 36200.84 1142448.52 00:22:45.688 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme8n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme8n1 : 1.92 133.30 8.33 33.32 0.00 360150.36 41443.72 1134459.37 00:22:45.688 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme9n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme9n1 : 1.92 133.24 8.33 33.31 0.00 356775.25 42941.68 1126470.22 00:22:45.688 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:45.688 Job: Nvme10n1 ended in about 1.92 seconds with error 00:22:45.688 Verification LBA range: start 0x0 length 0x400 00:22:45.688 Nvme10n1 : 1.92 99.88 6.24 33.29 0.00 442246.10 54925.41 1110491.92 00:22:45.688 
=================================================================================================================== 00:22:45.688 Total : 1306.07 81.63 335.02 0.00 373322.72 4649.94 1190383.42 00:22:45.688 [2024-06-07 23:13:37.929039] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:45.688 [2024-06-07 23:13:37.929061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:45.688 [2024-06-07 23:13:37.929072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:45.688 [2024-06-07 23:13:37.929081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:45.947 [2024-06-07 23:13:37.937614] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.937665] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.937672] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:22:45.947 [2024-06-07 23:13:37.938286] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.938297] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.938302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:22:45.947 [2024-06-07 23:13:37.938401] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.938408] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.938413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:22:45.947 [2024-06-07 23:13:37.938505] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.938513] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.938517] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:22:45.947 [2024-06-07 23:13:37.941796] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.941835] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.941853] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:22:45.947 [2024-06-07 23:13:37.941967] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.941991] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.942007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect 
rqpair=0x2000192c5040 00:22:45.947 [2024-06-07 23:13:37.942113] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.942129] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.942137] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:22:45.947 [2024-06-07 23:13:37.942781] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.942795] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.942803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:22:45.947 [2024-06-07 23:13:37.942886] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.947 [2024-06-07 23:13:37.942898] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.947 [2024-06-07 23:13:37.942906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:22:45.948 [2024-06-07 23:13:37.942977] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.948 [2024-06-07 23:13:37.942988] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.948 [2024-06-07 23:13:37.942996] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 997441 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.948 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:45.948 rmmod nvme_rdma 00:22:45.948 rmmod nvme_fabrics 00:22:46.207 
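With bdevperf already sent kill -9 (the "Killed" message that follows is the shell reaping that process), stoptarget and nvmftestfini only have to clear the test's state files and unload the kernel fabrics modules, which is exactly what the modprobe -v -r and rmmod lines above show. Doing the equivalent by hand after an aborted run looks roughly like this; the workspace paths are the ones from this job and are otherwise just placeholders:

  pkill -9 -f build/examples/bdevperf || true
  rm -f ./local-job0-0-verify.state
  rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf \
         /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
  sync
  modprobe -v -r nvme-rdma      # also drops nvme_fabrics once unused, as in the rmmod output above
  modprobe -v -r nvme-fabrics   # no-op if the previous line already removed it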
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 997441 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:46.207 00:22:46.207 real 0m5.046s 00:22:46.207 user 0m17.316s 00:22:46.207 sys 0m1.063s 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.207 ************************************ 00:22:46.207 END TEST nvmf_shutdown_tc3 00:22:46.207 ************************************ 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:46.207 00:22:46.207 real 0m23.440s 00:22:46.207 user 1m8.157s 00:22:46.207 sys 0m8.028s 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:46.207 23:13:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:46.207 ************************************ 00:22:46.207 END TEST nvmf_shutdown 00:22:46.207 ************************************ 00:22:46.207 23:13:38 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:46.207 23:13:38 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:46.207 23:13:38 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:46.207 23:13:38 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:46.207 23:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:46.207 ************************************ 00:22:46.207 START TEST nvmf_multicontroller 00:22:46.207 ************************************ 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:46.207 * Looking for test storage... 
00:22:46.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.207 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:22:46.208 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:22:46.208 00:22:46.208 real 0m0.112s 00:22:46.208 user 0m0.049s 00:22:46.208 sys 0m0.070s 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:46.208 23:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:46.208 ************************************ 00:22:46.208 END TEST nvmf_multicontroller 00:22:46.208 ************************************ 00:22:46.466 23:13:38 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:46.466 23:13:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:46.466 23:13:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:46.466 23:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:46.466 ************************************ 00:22:46.466 START TEST nvmf_aer 00:22:46.466 ************************************ 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:46.466 * Looking for test storage... 00:22:46.466 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.466 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.467 23:13:38 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.467 23:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:53.028 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:53.028 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:53.028 Found net devices under 0000:da:00.0: mlx_0_0 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
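The aer run starts the way every nvmf test does: nvmftestinit collects the known RDMA-capable PCI IDs, keeps the Mellanox (vendor 0x15b3) functions it finds, 0000:da:00.0 and 0000:da:00.1 here, and resolves each one to its network interface through sysfs, which is where the "Found net devices under 0000:da:00.0: mlx_0_0" line above (and the matching line for the second port just below) comes from. The same vendor-to-netdev mapping can be checked by hand with nothing but lspci and sysfs, as in this small sketch:

  # list Mellanox PCI functions and the net devices behind them
  for pci in $(lspci -D -n -d 15b3: | awk '{print $1}'); do
      devs=$(ls /sys/bus/pci/devices/"$pci"/net 2>/dev/null)
      echo "$pci -> ${devs:-no netdev}"
  done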
00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:53.028 Found net devices under 0000:da:00.1: mlx_0_1 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:53.028 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:53.029 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.029 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:53.029 altname enp218s0f0np0 00:22:53.029 altname ens818f0np0 00:22:53.029 inet 192.168.100.8/24 scope global mlx_0_0 00:22:53.029 valid_lft forever preferred_lft forever 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:53.029 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.029 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:53.029 altname enp218s0f1np1 00:22:53.029 altname ens818f1np1 00:22:53.029 inet 192.168.100.9/24 scope global mlx_0_1 00:22:53.029 valid_lft forever preferred_lft forever 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 
)) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:53.029 192.168.100.9' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:53.029 192.168.100.9' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:53.029 192.168.100.9' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1001593 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1001593 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1001593 ']' 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:53.029 23:13:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.029 [2024-06-07 23:13:44.647030] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:22:53.029 [2024-06-07 23:13:44.647076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.029 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.029 [2024-06-07 23:13:44.708956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.029 [2024-06-07 23:13:44.787699] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.029 [2024-06-07 23:13:44.787737] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.029 [2024-06-07 23:13:44.787744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.029 [2024-06-07 23:13:44.787749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.029 [2024-06-07 23:13:44.787754] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
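As an aside, the app_setup_trace notices above already spell out how to pull the tracepoint data enabled by '-e 0xFFFF'; a minimal sketch, assuming the spdk_trace binary sits in build/bin of this checked-out tree, would be:

    # Snapshot the nvmf target's trace ring while it is running (instance id 0),
    # or keep the shared-memory file for offline decoding, as the notices suggest.
    # The build/bin path is an assumption about this workspace layout.
    build/bin/spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0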
00:22:53.029 [2024-06-07 23:13:44.787824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.029 [2024-06-07 23:13:44.787918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.029 [2024-06-07 23:13:44.788005] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.029 [2024-06-07 23:13:44.788006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.334 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 [2024-06-07 23:13:45.508345] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b0e9d0/0x1b12ec0) succeed. 00:22:53.334 [2024-06-07 23:13:45.517534] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b10010/0x1b54550) succeed. 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.591 Malloc0 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.591 [2024-06-07 23:13:45.688721] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.591 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.591 [ 00:22:53.591 { 00:22:53.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:53.591 "subtype": "Discovery", 00:22:53.591 "listen_addresses": [], 00:22:53.591 "allow_any_host": true, 00:22:53.591 "hosts": [] 00:22:53.591 }, 00:22:53.591 { 00:22:53.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.591 "subtype": "NVMe", 00:22:53.591 "listen_addresses": [ 00:22:53.591 { 00:22:53.591 "trtype": "RDMA", 00:22:53.591 "adrfam": "IPv4", 00:22:53.591 "traddr": "192.168.100.8", 00:22:53.591 "trsvcid": "4420" 00:22:53.591 } 00:22:53.591 ], 00:22:53.591 "allow_any_host": true, 00:22:53.591 "hosts": [], 00:22:53.591 "serial_number": "SPDK00000000000001", 00:22:53.591 "model_number": "SPDK bdev Controller", 00:22:53.591 "max_namespaces": 2, 00:22:53.591 "min_cntlid": 1, 00:22:53.592 "max_cntlid": 65519, 00:22:53.592 "namespaces": [ 00:22:53.592 { 00:22:53.592 "nsid": 1, 00:22:53.592 "bdev_name": "Malloc0", 00:22:53.592 "name": "Malloc0", 00:22:53.592 "nguid": "D04ED1990A43474AA0EDE20657003BB0", 00:22:53.592 "uuid": "d04ed199-0a43-474a-a0ed-e20657003bb0" 00:22:53.592 } 00:22:53.592 ] 00:22:53.592 } 00:22:53.592 ] 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=1001670 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:22:53.592 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:22:53.592 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 Malloc1 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 [ 00:22:53.850 { 00:22:53.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:53.850 "subtype": "Discovery", 00:22:53.850 "listen_addresses": [], 00:22:53.850 "allow_any_host": true, 00:22:53.850 "hosts": [] 00:22:53.850 }, 00:22:53.850 { 00:22:53.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.850 "subtype": "NVMe", 00:22:53.850 "listen_addresses": [ 00:22:53.850 { 00:22:53.850 "trtype": "RDMA", 00:22:53.850 "adrfam": "IPv4", 00:22:53.850 "traddr": "192.168.100.8", 00:22:53.850 "trsvcid": "4420" 00:22:53.850 } 00:22:53.850 ], 00:22:53.850 "allow_any_host": true, 00:22:53.850 "hosts": [], 00:22:53.850 "serial_number": "SPDK00000000000001", 00:22:53.850 "model_number": "SPDK bdev Controller", 00:22:53.850 "max_namespaces": 2, 00:22:53.850 "min_cntlid": 1, 00:22:53.850 "max_cntlid": 65519, 00:22:53.850 "namespaces": [ 00:22:53.850 { 00:22:53.850 "nsid": 1, 00:22:53.850 "bdev_name": "Malloc0", 00:22:53.850 "name": "Malloc0", 00:22:53.850 "nguid": "D04ED1990A43474AA0EDE20657003BB0", 00:22:53.850 "uuid": "d04ed199-0a43-474a-a0ed-e20657003bb0" 00:22:53.850 }, 00:22:53.850 { 00:22:53.850 "nsid": 2, 00:22:53.850 "bdev_name": "Malloc1", 00:22:53.850 "name": "Malloc1", 00:22:53.850 "nguid": "AF2727E295A94A64BFEE7355E91C0E62", 00:22:53.850 "uuid": "af2727e2-95a9-4a64-bfee-7355e91c0e62" 00:22:53.850 } 00:22:53.850 ] 00:22:53.850 } 00:22:53.850 ] 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:45 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 1001670 00:22:53.850 Asynchronous Event Request test 00:22:53.850 Attaching to 192.168.100.8 00:22:53.850 Attached to 192.168.100.8 00:22:53.850 Registering asynchronous event callbacks... 00:22:53.850 Starting namespace attribute notice tests for all controllers... 00:22:53.850 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:53.850 aer_cb - Changed Namespace 00:22:53.850 Cleaning up... 
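For readability, the target-side sequence that produced the AER output above boils down to a handful of RPCs. The sketch below is reassembled from the traced rpc_cmd calls and is only an approximation: the scripts/rpc.py client and the /var/tmp/spdk.sock socket path are assumptions, and the real test waits for /tmp/aer_touch_file to appear before hot-adding the second namespace.

    # Approximate replay of host/aer.sh as traced above; rpc.py client and
    # socket path are assumptions, the RPC arguments are taken from the log.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # The aer helper registers for AENs; the test then triggers a
    # namespace-attribute notice by hot-adding a second namespace (nsid 2).
    test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait   # aer exits once the Changed Namespace callback fires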
00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.850 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:53.850 rmmod nvme_rdma 00:22:53.850 rmmod nvme_fabrics 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1001593 ']' 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1001593 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1001593 ']' 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1001593 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1001593 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1001593' 00:22:54.108 killing process with pid 1001593 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1001593 00:22:54.108 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1001593 00:22:54.366 23:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.366 23:13:46 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:54.366 00:22:54.366 real 0m7.902s 00:22:54.366 user 0m8.285s 00:22:54.366 sys 0m4.886s 00:22:54.366 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:54.366 23:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.366 ************************************ 00:22:54.366 END TEST nvmf_aer 00:22:54.366 ************************************ 00:22:54.366 23:13:46 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:54.366 23:13:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:54.366 23:13:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:54.366 23:13:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:54.366 ************************************ 00:22:54.366 START TEST nvmf_async_init 00:22:54.366 ************************************ 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:54.366 * Looking for test storage... 00:22:54.366 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.366 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.367 23:13:46 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6b223aaf7a0b4ca488a7f2e6a39b40c4 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.367 23:13:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:00.940 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:00.940 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:00.940 23:13:52 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:00.940 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:00.941 Found net devices under 0000:da:00.0: mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:00.941 Found net devices under 0000:da:00.1: mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:00.941 
23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:00.941 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:00.941 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:00.941 altname enp218s0f0np0 00:23:00.941 altname ens818f0np0 00:23:00.941 inet 192.168.100.8/24 scope global mlx_0_0 00:23:00.941 valid_lft forever preferred_lft forever 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:00.941 23:13:52 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:00.941 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:00.941 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:00.941 altname enp218s0f1np1 00:23:00.941 altname ens818f1np1 00:23:00.941 inet 192.168.100.9/24 scope global mlx_0_1 00:23:00.941 valid_lft forever preferred_lft forever 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:00.941 192.168.100.9' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:00.941 192.168.100.9' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:00.941 192.168.100.9' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1005273 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1005273 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 1005273 ']' 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
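The waitforlisten helper invoked above is essentially a poll loop against the RPC socket; a rough stand-in, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock path printed in the message, could look like:

    # Minimal approximation of waitforlisten: succeed once the target answers an
    # RPC on its UNIX socket, fail early if the process dies. The retry count and
    # rpc.py location are assumptions.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    wait_for_rpc $!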
00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.941 23:13:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.941 [2024-06-07 23:13:52.849694] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:23:00.941 [2024-06-07 23:13:52.849742] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.941 [2024-06-07 23:13:52.909625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.941 [2024-06-07 23:13:52.988997] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.941 [2024-06-07 23:13:52.989039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.941 [2024-06-07 23:13:52.989046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.941 [2024-06-07 23:13:52.989052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.941 [2024-06-07 23:13:52.989056] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.941 [2024-06-07 23:13:52.989095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.509 [2024-06-07 23:13:53.711993] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x233f830/0x2343d20) succeed. 00:23:01.509 [2024-06-07 23:13:53.720625] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2340d30/0x23853b0) succeed. 
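Before the subsystem is built on top of it, the transport and the two IB devices reported above can be sanity-checked from the same RPC socket; a small sketch (the nvmf_get_transports call and the ibv_devices utility from rdma-core are assumptions, neither appears in this log):

    # Confirm the RDMA transport registered by nvmf_create_transport and the
    # mlx5_0/mlx5_1 verbs devices it enumerated; both commands are read-only.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports
    ibv_devices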
00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.509 null0 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.509 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6b223aaf7a0b4ca488a7f2e6a39b40c4 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 [2024-06-07 23:13:53.813342] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 nvme0n1 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.767 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.767 [ 00:23:01.767 { 00:23:01.767 "name": "nvme0n1", 00:23:01.767 "aliases": [ 00:23:01.767 "6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4" 00:23:01.767 ], 00:23:01.767 "product_name": "NVMe disk", 00:23:01.767 "block_size": 512, 00:23:01.767 "num_blocks": 2097152, 00:23:01.767 "uuid": 
"6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4", 00:23:01.767 "assigned_rate_limits": { 00:23:01.767 "rw_ios_per_sec": 0, 00:23:01.767 "rw_mbytes_per_sec": 0, 00:23:01.767 "r_mbytes_per_sec": 0, 00:23:01.767 "w_mbytes_per_sec": 0 00:23:01.767 }, 00:23:01.767 "claimed": false, 00:23:01.767 "zoned": false, 00:23:01.767 "supported_io_types": { 00:23:01.767 "read": true, 00:23:01.767 "write": true, 00:23:01.767 "unmap": false, 00:23:01.767 "write_zeroes": true, 00:23:01.767 "flush": true, 00:23:01.767 "reset": true, 00:23:01.767 "compare": true, 00:23:01.767 "compare_and_write": true, 00:23:01.767 "abort": true, 00:23:01.767 "nvme_admin": true, 00:23:01.767 "nvme_io": true 00:23:01.767 }, 00:23:01.767 "memory_domains": [ 00:23:01.767 { 00:23:01.767 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:01.767 "dma_device_type": 0 00:23:01.767 } 00:23:01.768 ], 00:23:01.768 "driver_specific": { 00:23:01.768 "nvme": [ 00:23:01.768 { 00:23:01.768 "trid": { 00:23:01.768 "trtype": "RDMA", 00:23:01.768 "adrfam": "IPv4", 00:23:01.768 "traddr": "192.168.100.8", 00:23:01.768 "trsvcid": "4420", 00:23:01.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.768 }, 00:23:01.768 "ctrlr_data": { 00:23:01.768 "cntlid": 1, 00:23:01.768 "vendor_id": "0x8086", 00:23:01.768 "model_number": "SPDK bdev Controller", 00:23:01.768 "serial_number": "00000000000000000000", 00:23:01.768 "firmware_revision": "24.09", 00:23:01.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.768 "oacs": { 00:23:01.768 "security": 0, 00:23:01.768 "format": 0, 00:23:01.768 "firmware": 0, 00:23:01.768 "ns_manage": 0 00:23:01.768 }, 00:23:01.768 "multi_ctrlr": true, 00:23:01.768 "ana_reporting": false 00:23:01.768 }, 00:23:01.768 "vs": { 00:23:01.768 "nvme_version": "1.3" 00:23:01.768 }, 00:23:01.768 "ns_data": { 00:23:01.768 "id": 1, 00:23:01.768 "can_share": true 00:23:01.768 } 00:23:01.768 } 00:23:01.768 ], 00:23:01.768 "mp_policy": "active_passive" 00:23:01.768 } 00:23:01.768 } 00:23:01.768 ] 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.768 [2024-06-07 23:13:53.920331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.768 [2024-06-07 23:13:53.946361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.768 [2024-06-07 23:13:53.971800] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.768 [ 00:23:01.768 { 00:23:01.768 "name": "nvme0n1", 00:23:01.768 "aliases": [ 00:23:01.768 "6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4" 00:23:01.768 ], 00:23:01.768 "product_name": "NVMe disk", 00:23:01.768 "block_size": 512, 00:23:01.768 "num_blocks": 2097152, 00:23:01.768 "uuid": "6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4", 00:23:01.768 "assigned_rate_limits": { 00:23:01.768 "rw_ios_per_sec": 0, 00:23:01.768 "rw_mbytes_per_sec": 0, 00:23:01.768 "r_mbytes_per_sec": 0, 00:23:01.768 "w_mbytes_per_sec": 0 00:23:01.768 }, 00:23:01.768 "claimed": false, 00:23:01.768 "zoned": false, 00:23:01.768 "supported_io_types": { 00:23:01.768 "read": true, 00:23:01.768 "write": true, 00:23:01.768 "unmap": false, 00:23:01.768 "write_zeroes": true, 00:23:01.768 "flush": true, 00:23:01.768 "reset": true, 00:23:01.768 "compare": true, 00:23:01.768 "compare_and_write": true, 00:23:01.768 "abort": true, 00:23:01.768 "nvme_admin": true, 00:23:01.768 "nvme_io": true 00:23:01.768 }, 00:23:01.768 "memory_domains": [ 00:23:01.768 { 00:23:01.768 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:01.768 "dma_device_type": 0 00:23:01.768 } 00:23:01.768 ], 00:23:01.768 "driver_specific": { 00:23:01.768 "nvme": [ 00:23:01.768 { 00:23:01.768 "trid": { 00:23:01.768 "trtype": "RDMA", 00:23:01.768 "adrfam": "IPv4", 00:23:01.768 "traddr": "192.168.100.8", 00:23:01.768 "trsvcid": "4420", 00:23:01.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.768 }, 00:23:01.768 "ctrlr_data": { 00:23:01.768 "cntlid": 2, 00:23:01.768 "vendor_id": "0x8086", 00:23:01.768 "model_number": "SPDK bdev Controller", 00:23:01.768 "serial_number": "00000000000000000000", 00:23:01.768 "firmware_revision": "24.09", 00:23:01.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.768 "oacs": { 00:23:01.768 "security": 0, 00:23:01.768 "format": 0, 00:23:01.768 "firmware": 0, 00:23:01.768 "ns_manage": 0 00:23:01.768 }, 00:23:01.768 "multi_ctrlr": true, 00:23:01.768 "ana_reporting": false 00:23:01.768 }, 00:23:01.768 "vs": { 00:23:01.768 "nvme_version": "1.3" 00:23:01.768 }, 00:23:01.768 "ns_data": { 00:23:01.768 "id": 1, 00:23:01.768 "can_share": true 00:23:01.768 } 00:23:01.768 } 00:23:01.768 ], 00:23:01.768 "mp_policy": "active_passive" 00:23:01.768 } 00:23:01.768 } 00:23:01.768 ] 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.768 23:13:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6vV3PrqObm 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:01.768 23:13:54 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6vV3PrqObm 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.768 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.768 [2024-06-07 23:13:54.042797] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6vV3PrqObm 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6vV3PrqObm 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.027 [2024-06-07 23:13:54.062836] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.027 nvme0n1 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.027 [ 00:23:02.027 { 00:23:02.027 "name": "nvme0n1", 00:23:02.027 "aliases": [ 00:23:02.027 "6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4" 00:23:02.027 ], 00:23:02.027 "product_name": "NVMe disk", 00:23:02.027 "block_size": 512, 00:23:02.027 "num_blocks": 2097152, 00:23:02.027 "uuid": "6b223aaf-7a0b-4ca4-88a7-f2e6a39b40c4", 00:23:02.027 "assigned_rate_limits": { 00:23:02.027 "rw_ios_per_sec": 0, 00:23:02.027 "rw_mbytes_per_sec": 0, 00:23:02.027 "r_mbytes_per_sec": 0, 00:23:02.027 "w_mbytes_per_sec": 0 00:23:02.027 }, 00:23:02.027 "claimed": false, 00:23:02.027 "zoned": false, 00:23:02.027 "supported_io_types": { 00:23:02.027 "read": true, 00:23:02.027 "write": true, 00:23:02.027 "unmap": false, 00:23:02.027 "write_zeroes": true, 00:23:02.027 "flush": true, 00:23:02.027 "reset": true, 00:23:02.027 "compare": true, 00:23:02.027 "compare_and_write": true, 00:23:02.027 "abort": true, 
00:23:02.027 "nvme_admin": true, 00:23:02.027 "nvme_io": true 00:23:02.027 }, 00:23:02.027 "memory_domains": [ 00:23:02.027 { 00:23:02.027 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:02.027 "dma_device_type": 0 00:23:02.027 } 00:23:02.027 ], 00:23:02.027 "driver_specific": { 00:23:02.027 "nvme": [ 00:23:02.027 { 00:23:02.027 "trid": { 00:23:02.027 "trtype": "RDMA", 00:23:02.027 "adrfam": "IPv4", 00:23:02.027 "traddr": "192.168.100.8", 00:23:02.027 "trsvcid": "4421", 00:23:02.027 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:02.027 }, 00:23:02.027 "ctrlr_data": { 00:23:02.027 "cntlid": 3, 00:23:02.027 "vendor_id": "0x8086", 00:23:02.027 "model_number": "SPDK bdev Controller", 00:23:02.027 "serial_number": "00000000000000000000", 00:23:02.027 "firmware_revision": "24.09", 00:23:02.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:02.027 "oacs": { 00:23:02.027 "security": 0, 00:23:02.027 "format": 0, 00:23:02.027 "firmware": 0, 00:23:02.027 "ns_manage": 0 00:23:02.027 }, 00:23:02.027 "multi_ctrlr": true, 00:23:02.027 "ana_reporting": false 00:23:02.027 }, 00:23:02.027 "vs": { 00:23:02.027 "nvme_version": "1.3" 00:23:02.027 }, 00:23:02.027 "ns_data": { 00:23:02.027 "id": 1, 00:23:02.027 "can_share": true 00:23:02.027 } 00:23:02.027 } 00:23:02.027 ], 00:23:02.027 "mp_policy": "active_passive" 00:23:02.027 } 00:23:02.027 } 00:23:02.027 ] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.6vV3PrqObm 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:02.027 rmmod nvme_rdma 00:23:02.027 rmmod nvme_fabrics 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1005273 ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1005273 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1005273 ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1005273 00:23:02.027 23:13:54 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1005273 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1005273' 00:23:02.027 killing process with pid 1005273 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1005273 00:23:02.027 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1005273 00:23:02.286 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.286 23:13:54 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:02.286 00:23:02.286 real 0m7.992s 00:23:02.286 user 0m3.547s 00:23:02.286 sys 0m5.070s 00:23:02.286 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:02.286 23:13:54 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.286 ************************************ 00:23:02.286 END TEST nvmf_async_init 00:23:02.286 ************************************ 00:23:02.286 23:13:54 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:02.286 23:13:54 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:02.286 23:13:54 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:02.286 23:13:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:02.544 ************************************ 00:23:02.544 START TEST dma 00:23:02.544 ************************************ 00:23:02.544 23:13:54 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:02.544 * Looking for test storage... 
00:23:02.544 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:02.544 23:13:54 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.544 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:02.545 23:13:54 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.545 23:13:54 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.545 23:13:54 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.545 23:13:54 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.545 23:13:54 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.545 23:13:54 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.545 23:13:54 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:23:02.545 23:13:54 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.545 23:13:54 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:23:02.545 23:13:54 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:23:02.545 23:13:54 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:23:02.545 23:13:54 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:23:02.545 23:13:54 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.545 23:13:54 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.545 23:13:54 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.545 23:13:54 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.545 23:13:54 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.108 23:14:00 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:09.108 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:09.108 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:09.108 Found net devices under 0000:da:00.0: mlx_0_0 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:09.108 Found net devices under 0000:da:00.1: mlx_0_1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:09.108 23:14:00 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:09.108 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:09.108 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:09.108 altname enp218s0f0np0 00:23:09.108 altname ens818f0np0 00:23:09.108 inet 192.168.100.8/24 scope global mlx_0_0 00:23:09.108 valid_lft forever preferred_lft forever 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:09.108 23:14:00 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:09.108 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:09.108 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:09.109 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:09.109 altname enp218s0f1np1 00:23:09.109 altname ens818f1np1 00:23:09.109 inet 192.168.100.9/24 scope global mlx_0_1 00:23:09.109 valid_lft forever preferred_lft forever 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:09.109 192.168.100.9' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:09.109 192.168.100.9' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:09.109 192.168.100.9' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:09.109 23:14:00 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=1009023 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 1009023 00:23:09.109 23:14:00 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@830 -- # '[' -z 1009023 ']' 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:09.109 23:14:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.109 [2024-06-07 23:14:00.974816] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:23:09.109 [2024-06-07 23:14:00.974863] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.109 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.109 [2024-06-07 23:14:01.034067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.109 [2024-06-07 23:14:01.111231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:09.109 [2024-06-07 23:14:01.111272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.109 [2024-06-07 23:14:01.111279] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.109 [2024-06-07 23:14:01.111285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.109 [2024-06-07 23:14:01.111289] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.109 [2024-06-07 23:14:01.111334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.109 [2024-06-07 23:14:01.111337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@863 -- # return 0 00:23:09.675 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.675 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.675 23:14:01 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.675 [2024-06-07 23:14:01.818683] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2442360/0x2446850) succeed. 00:23:09.675 [2024-06-07 23:14:01.827475] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2443860/0x2487ee0) succeed. 
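(The interface discovery traced a little above reduces to one pipeline per mlx port; this is roughly what the get_ip_address helper does, with the interface names and addresses exactly as reported in this run:

    # first port  -> NVMF_FIRST_TARGET_IP  (192.168.100.8 here)
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # second port -> NVMF_SECOND_TARGET_IP (192.168.100.9 here)
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1

The dma test that follows creates its listener on the first of these addresses.)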
00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.675 23:14:01 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.675 Malloc0 00:23:09.675 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.933 23:14:01 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.933 23:14:01 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.933 23:14:01 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:09.933 [2024-06-07 23:14:01.971786] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:09.933 23:14:01 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.933 23:14:01 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:23:09.933 23:14:01 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.933 { 00:23:09.933 "params": { 00:23:09.933 "name": "Nvme$subsystem", 00:23:09.933 "trtype": "$TEST_TRANSPORT", 00:23:09.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.933 "adrfam": "ipv4", 00:23:09.933 "trsvcid": "$NVMF_PORT", 00:23:09.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.933 "hdgst": ${hdgst:-false}, 00:23:09.933 "ddgst": ${ddgst:-false} 00:23:09.933 }, 00:23:09.933 "method": "bdev_nvme_attach_controller" 00:23:09.933 } 00:23:09.933 EOF 00:23:09.933 )") 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
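(The test_dma invocation above takes its bdev configuration as JSON on file descriptor 62 (--json /dev/fd/62); the config it receives is the bdev_nvme_attach_controller block printed just below, pointing Nvme0 at the RDMA listener created above. A rough standalone equivalent, assuming that generated JSON has been saved to a hypothetical nvme0.json:

    ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json nvme0.json -b Nvme0n1 -f -x translate

The later runs swap the bdev and the -x mode: Malloc0 with pull_push, then an lvs0/lvol0 logical volume with memzero and translate.)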
00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:23:09.933 23:14:01 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.933 "params": { 00:23:09.933 "name": "Nvme0", 00:23:09.933 "trtype": "rdma", 00:23:09.933 "traddr": "192.168.100.8", 00:23:09.933 "adrfam": "ipv4", 00:23:09.933 "trsvcid": "4420", 00:23:09.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:09.933 "hdgst": false, 00:23:09.933 "ddgst": false 00:23:09.933 }, 00:23:09.933 "method": "bdev_nvme_attach_controller" 00:23:09.933 }' 00:23:09.933 [2024-06-07 23:14:02.018344] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:23:09.933 [2024-06-07 23:14:02.018387] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009267 ] 00:23:09.933 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.933 [2024-06-07 23:14:02.072123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.933 [2024-06-07 23:14:02.144791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.933 [2024-06-07 23:14:02.144793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.485 bdev Nvme0n1 reports 1 memory domains 00:23:16.485 bdev Nvme0n1 supports RDMA memory domain 00:23:16.485 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:16.485 ========================================================================== 00:23:16.485 Latency [us] 00:23:16.485 IOPS MiB/s Average min max 00:23:16.485 Core 2: 21736.27 84.91 735.39 244.81 8321.00 00:23:16.485 Core 3: 21789.06 85.11 733.61 254.78 8417.94 00:23:16.485 ========================================================================== 00:23:16.485 Total : 43525.34 170.02 734.50 244.81 8417.94 00:23:16.485 00:23:16.485 Total operations: 217684, translate 217684 pull_push 0 memzero 0 00:23:16.485 23:14:07 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:23:16.485 23:14:07 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:23:16.485 23:14:07 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:23:16.485 [2024-06-07 23:14:07.580128] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:23:16.485 [2024-06-07 23:14:07.580179] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010189 ] 00:23:16.485 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.485 [2024-06-07 23:14:07.634111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:16.485 [2024-06-07 23:14:07.705839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.485 [2024-06-07 23:14:07.705842] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.736 bdev Malloc0 reports 2 memory domains 00:23:21.736 bdev Malloc0 doesn't support RDMA memory domain 00:23:21.736 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:21.736 ========================================================================== 00:23:21.736 Latency [us] 00:23:21.736 IOPS MiB/s Average min max 00:23:21.736 Core 2: 14562.41 56.88 1097.91 431.34 1927.56 00:23:21.736 Core 3: 14514.83 56.70 1101.53 393.59 1997.02 00:23:21.736 ========================================================================== 00:23:21.736 Total : 29077.24 113.58 1099.72 393.59 1997.02 00:23:21.736 00:23:21.736 Total operations: 145450, translate 0 pull_push 581800 memzero 0 00:23:21.736 23:14:13 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:23:21.736 23:14:13 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:23:21.736 23:14:13 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:21.736 23:14:13 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:23:21.736 Ignoring -M option 00:23:21.736 [2024-06-07 23:14:13.053248] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:23:21.736 [2024-06-07 23:14:13.053302] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011101 ] 00:23:21.736 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.736 [2024-06-07 23:14:13.107867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.736 [2024-06-07 23:14:13.176938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.736 [2024-06-07 23:14:13.176941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.001 bdev b2ae28bd-7c10-4b25-bd82-fa2fbad983ad reports 1 memory domains 00:23:27.001 bdev b2ae28bd-7c10-4b25-bd82-fa2fbad983ad supports RDMA memory domain 00:23:27.001 Initialization complete, running randread IO for 5 sec on 2 cores 00:23:27.001 ========================================================================== 00:23:27.001 Latency [us] 00:23:27.001 IOPS MiB/s Average min max 00:23:27.001 Core 2: 79971.84 312.39 199.32 74.63 2750.47 00:23:27.001 Core 3: 81826.84 319.64 194.73 74.08 1358.91 00:23:27.001 ========================================================================== 00:23:27.001 Total : 161798.68 632.03 197.00 74.08 2750.47 00:23:27.001 00:23:27.001 Total operations: 809084, translate 0 pull_push 0 memzero 809084 00:23:27.001 23:14:18 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:23:27.001 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.001 [2024-06-07 23:14:18.709571] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:28.925 Initializing NVMe Controllers 00:23:28.925 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:23:28.925 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:28.925 Initialization complete. Launching workers. 00:23:28.925 ======================================================== 00:23:28.925 Latency(us) 00:23:28.925 Device Information : IOPS MiB/s Average min max 00:23:28.925 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.10 5146.50 9815.34 00:23:28.925 ======================================================== 00:23:28.925 Total : 2016.00 7.88 7972.10 5146.50 9815.34 00:23:28.925 00:23:28.925 23:14:21 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:23:28.925 23:14:21 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:23:28.925 23:14:21 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:28.925 23:14:21 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:23:28.925 [2024-06-07 23:14:21.051423] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
00:23:28.925 [2024-06-07 23:14:21.051467] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012262 ] 00:23:28.925 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.925 [2024-06-07 23:14:21.105988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:28.925 [2024-06-07 23:14:21.177972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.925 [2024-06-07 23:14:21.177974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.556 bdev e3fec07f-c4ed-430d-a141-711d373647ac reports 1 memory domains 00:23:35.556 bdev e3fec07f-c4ed-430d-a141-711d373647ac supports RDMA memory domain 00:23:35.556 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:35.556 ========================================================================== 00:23:35.556 Latency [us] 00:23:35.556 IOPS MiB/s Average min max 00:23:35.556 Core 2: 18878.18 73.74 846.79 49.42 12872.49 00:23:35.556 Core 3: 19174.72 74.90 833.69 12.38 12569.93 00:23:35.556 ========================================================================== 00:23:35.556 Total : 38052.90 148.64 840.19 12.38 12872.49 00:23:35.556 00:23:35.556 Total operations: 190303, translate 190200 pull_push 0 memzero 103 00:23:35.556 23:14:26 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:35.556 23:14:26 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:35.556 rmmod nvme_rdma 00:23:35.556 rmmod nvme_fabrics 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 1009023 ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 1009023 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@949 -- # '[' -z 1009023 ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # kill -0 1009023 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # uname 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1009023 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1009023' 00:23:35.556 killing process with pid 1009023 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@968 -- # kill 1009023 00:23:35.556 23:14:26 nvmf_rdma.dma -- common/autotest_common.sh@973 -- # 
wait 1009023 00:23:35.556 23:14:27 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.556 23:14:27 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:35.556 00:23:35.556 real 0m32.459s 00:23:35.556 user 1m36.380s 00:23:35.556 sys 0m5.740s 00:23:35.557 23:14:27 nvmf_rdma.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:35.557 23:14:27 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:35.557 ************************************ 00:23:35.557 END TEST dma 00:23:35.557 ************************************ 00:23:35.557 23:14:27 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:35.557 23:14:27 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:35.557 23:14:27 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:35.557 23:14:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:35.557 ************************************ 00:23:35.557 START TEST nvmf_identify 00:23:35.557 ************************************ 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:35.557 * Looking for test storage... 00:23:35.557 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.557 23:14:27 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:40.825 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:40.825 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.825 23:14:32 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:40.825 Found net devices under 0000:da:00.0: mlx_0_0 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:40.825 Found net devices under 0000:da:00.1: mlx_0_1 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:40.825 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:40.826 23:14:32 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:40.826 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:40.826 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:40.826 altname enp218s0f0np0 00:23:40.826 altname ens818f0np0 00:23:40.826 inet 192.168.100.8/24 scope global mlx_0_0 00:23:40.826 valid_lft forever preferred_lft forever 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:40.826 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:40.826 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:40.826 altname enp218s0f1np1 00:23:40.826 altname ens818f1np1 00:23:40.826 inet 192.168.100.9/24 scope global mlx_0_1 00:23:40.826 valid_lft forever preferred_lft forever 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:40.826 23:14:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:40.826 192.168.100.9' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:40.826 192.168.100.9' 
00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:40.826 192.168.100.9' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1016733 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1016733 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1016733 ']' 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:40.826 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.826 [2024-06-07 23:14:33.094625] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:23:40.826 [2024-06-07 23:14:33.094670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.085 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.085 [2024-06-07 23:14:33.155559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.085 [2024-06-07 23:14:33.231407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.085 [2024-06-07 23:14:33.231446] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
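[editor's note] The trace above shows nvmf/common.sh deriving NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP by reading the IPv4 address off each RDMA netdev (mlx_0_0 / mlx_0_1) with an `ip -o -4 addr show | awk | cut` pipeline. A minimal standalone sketch of that lookup, assuming the interface names seen in this run; the helper name rdma_if_ip is illustrative only and is not a function in nvmf/common.sh:

#!/usr/bin/env bash
# Sketch: mirror the traced pipeline nvmf/common.sh@113 uses to pick up a port's IPv4 address.
rdma_if_ip() {                                  # illustrative helper, not part of nvmf/common.sh
        local ifname=$1
        # `ip -o -4` prints one line per address: "<idx>: <if> inet <addr>/<prefix> ..."
        ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(rdma_if_ip mlx_0_0)      # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(rdma_if_ip mlx_0_1)     # 192.168.100.9 in this run
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"

[end editor's note]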
00:23:41.085 [2024-06-07 23:14:33.231452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.085 [2024-06-07 23:14:33.231460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.085 [2024-06-07 23:14:33.231464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.085 [2024-06-07 23:14:33.231527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.085 [2024-06-07 23:14:33.231652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.085 [2024-06-07 23:14:33.231743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.085 [2024-06-07 23:14:33.231744] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.650 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:41.650 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:23:41.650 23:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:41.650 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.650 23:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 [2024-06-07 23:14:33.935833] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc69d0/0xdcaec0) succeed. 00:23:41.908 [2024-06-07 23:14:33.945975] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc8010/0xe0c550) succeed. 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 Malloc0 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:41.908 
23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 [2024-06-07 23:14:34.148504] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.908 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:41.908 [ 00:23:41.908 { 00:23:41.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.908 "subtype": "Discovery", 00:23:41.908 "listen_addresses": [ 00:23:41.908 { 00:23:41.908 "trtype": "RDMA", 00:23:41.908 "adrfam": "IPv4", 00:23:41.908 "traddr": "192.168.100.8", 00:23:41.908 "trsvcid": "4420" 00:23:41.908 } 00:23:41.908 ], 00:23:41.908 "allow_any_host": true, 00:23:41.908 "hosts": [] 00:23:41.908 }, 00:23:41.908 { 00:23:41.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.908 "subtype": "NVMe", 00:23:41.908 "listen_addresses": [ 00:23:41.908 { 00:23:41.908 "trtype": "RDMA", 00:23:41.908 "adrfam": "IPv4", 00:23:41.908 "traddr": "192.168.100.8", 00:23:41.908 "trsvcid": "4420" 00:23:41.908 } 00:23:41.908 ], 00:23:41.908 "allow_any_host": true, 00:23:41.908 "hosts": [], 00:23:41.909 "serial_number": "SPDK00000000000001", 00:23:41.909 "model_number": "SPDK bdev Controller", 00:23:41.909 "max_namespaces": 32, 00:23:41.909 "min_cntlid": 1, 00:23:41.909 "max_cntlid": 65519, 00:23:41.909 "namespaces": [ 00:23:41.909 { 00:23:41.909 "nsid": 1, 00:23:41.909 "bdev_name": "Malloc0", 00:23:41.909 "name": "Malloc0", 00:23:41.909 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:41.909 "eui64": "ABCDEF0123456789", 00:23:41.909 "uuid": "be1665d3-385b-4095-ae1c-0b5fb9acad90" 00:23:41.909 } 00:23:41.909 ] 00:23:41.909 } 00:23:41.909 ] 00:23:41.909 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.909 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:42.174 [2024-06-07 23:14:34.199846] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
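[editor's note] The host/identify.sh steps traced above (lines @24 through @39 of that script) configure the target entirely through rpc_cmd, which in this harness forwards to scripts/rpc.py, and then run spdk_nvme_identify against the discovery subsystem. A condensed sketch of the same sequence run by hand, assuming nvmf_tgt is already up and listening on the default /var/tmp/spdk.sock and that $rootdir points at the SPDK checkout (both assumptions; neither appears explicitly in this log excerpt):

# Assumes: nvmf_tgt already running on the default RPC socket; $rootdir = SPDK source tree.
rpc=$rootdir/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8192-byte I/O unit
$rpc bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_get_subsystems                                                # JSON dump as shown above

# Query the discovery subsystem the same way identify.sh@39 does:
$rootdir/build/bin/spdk_nvme_identify \
        -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The spdk_nvme_identify output that follows in the log (controller capabilities, discovery log entries 0 and 1) is what this last command prints.
[end editor's note]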
00:23:42.174 [2024-06-07 23:14:34.199893] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016980 ] 00:23:42.174 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.174 [2024-06-07 23:14:34.241932] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:42.174 [2024-06-07 23:14:34.242075] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:42.174 [2024-06-07 23:14:34.242093] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:42.174 [2024-06-07 23:14:34.242097] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:42.174 [2024-06-07 23:14:34.242123] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:42.175 [2024-06-07 23:14:34.260530] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:42.175 [2024-06-07 23:14:34.275314] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:42.175 [2024-06-07 23:14:34.275333] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:42.175 [2024-06-07 23:14:34.275339] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275344] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275349] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275353] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275357] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275361] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275365] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275369] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275373] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275378] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275382] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275386] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275393] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275397] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275401] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275405] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275409] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275413] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275418] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275421] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275425] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275430] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275434] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275438] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275442] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275446] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275450] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275454] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275458] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275462] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275466] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275470] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:42.175 [2024-06-07 23:14:34.275475] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:42.175 [2024-06-07 23:14:34.275477] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:42.175 [2024-06-07 23:14:34.275496] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.275510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183600 00:23:42.175 [2024-06-07 23:14:34.281017] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281033] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281039] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:42.175 [2024-06-07 23:14:34.281044] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:42.175 [2024-06-07 23:14:34.281049] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:42.175 [2024-06-07 23:14:34.281060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281096] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281105] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:42.175 [2024-06-07 23:14:34.281109] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281113] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:42.175 [2024-06-07 23:14:34.281119] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281148] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281157] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:42.175 [2024-06-07 23:14:34.281161] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281172] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281213] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281220] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281244] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281253] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:42.175 [2024-06-07 23:14:34.281257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281261] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281265] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281370] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:42.175 [2024-06-07 23:14:34.281378] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281387] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281418] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281426] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:42.175 [2024-06-07 23:14:34.281430] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281436] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.175 [2024-06-07 23:14:34.281442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.175 [2024-06-07 23:14:34.281459] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.175 [2024-06-07 23:14:34.281463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:42.175 [2024-06-07 23:14:34.281467] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:23:42.175 [2024-06-07 23:14:34.281471] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:42.175 [2024-06-07 23:14:34.281475] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281480] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:42.176 [2024-06-07 23:14:34.281487] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:42.176 [2024-06-07 23:14:34.281494] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183600 00:23:42.176 [2024-06-07 23:14:34.281539] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281550] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:42.176 [2024-06-07 23:14:34.281555] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:42.176 [2024-06-07 23:14:34.281558] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:42.176 [2024-06-07 23:14:34.281563] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:42.176 [2024-06-07 23:14:34.281566] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:42.176 [2024-06-07 23:14:34.281570] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:42.176 [2024-06-07 23:14:34.281574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281583] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:42.176 [2024-06-07 23:14:34.281591] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.176 [2024-06-07 23:14:34.281621] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281634] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281639] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.176 [2024-06-07 23:14:34.281644] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.176 [2024-06-07 23:14:34.281654] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.176 [2024-06-07 23:14:34.281664] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.176 [2024-06-07 23:14:34.281673] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:42.176 [2024-06-07 23:14:34.281677] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:42.176 [2024-06-07 23:14:34.281689] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.176 [2024-06-07 23:14:34.281710] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281719] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:42.176 [2024-06-07 23:14:34.281725] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:42.176 [2024-06-07 23:14:34.281729] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281736] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183600 00:23:42.176 [2024-06-07 23:14:34.281764] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281776] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281784] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:42.176 [2024-06-07 23:14:34.281804] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183600 00:23:42.176 [2024-06-07 23:14:34.281817] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.176 [2024-06-07 23:14:34.281843] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281856] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183600 00:23:42.176 [2024-06-07 23:14:34.281866] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281870] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281878] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281890] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281902] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183600 00:23:42.176 [2024-06-07 23:14:34.281912] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.176 [2024-06-07 23:14:34.281929] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.176 [2024-06-07 23:14:34.281933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:42.176 [2024-06-07 23:14:34.281941] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.176 ===================================================== 00:23:42.176 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:42.176 
=====================================================
00:23:42.176 Controller Capabilities/Features
00:23:42.176 ================================
00:23:42.176 Vendor ID: 0000
00:23:42.176 Subsystem Vendor ID: 0000
00:23:42.176 Serial Number: ....................
00:23:42.176 Model Number: ........................................
00:23:42.176 Firmware Version: 24.09
00:23:42.176 Recommended Arb Burst: 0
00:23:42.176 IEEE OUI Identifier: 00 00 00
00:23:42.176 Multi-path I/O
00:23:42.176 May have multiple subsystem ports: No
00:23:42.176 May have multiple controllers: No
00:23:42.176 Associated with SR-IOV VF: No
00:23:42.176 Max Data Transfer Size: 131072
00:23:42.176 Max Number of Namespaces: 0
00:23:42.176 Max Number of I/O Queues: 1024
00:23:42.176 NVMe Specification Version (VS): 1.3
00:23:42.176 NVMe Specification Version (Identify): 1.3
00:23:42.176 Maximum Queue Entries: 128
00:23:42.176 Contiguous Queues Required: Yes
00:23:42.176 Arbitration Mechanisms Supported
00:23:42.176 Weighted Round Robin: Not Supported
00:23:42.176 Vendor Specific: Not Supported
00:23:42.176 Reset Timeout: 15000 ms
00:23:42.176 Doorbell Stride: 4 bytes
00:23:42.176 NVM Subsystem Reset: Not Supported
00:23:42.176 Command Sets Supported
00:23:42.176 NVM Command Set: Supported
00:23:42.176 Boot Partition: Not Supported
00:23:42.176 Memory Page Size Minimum: 4096 bytes
00:23:42.176 Memory Page Size Maximum: 4096 bytes
00:23:42.176 Persistent Memory Region: Not Supported
00:23:42.176 Optional Asynchronous Events Supported
00:23:42.176 Namespace Attribute Notices: Not Supported
00:23:42.176 Firmware Activation Notices: Not Supported
00:23:42.176 ANA Change Notices: Not Supported
00:23:42.176 PLE Aggregate Log Change Notices: Not Supported
00:23:42.176 LBA Status Info Alert Notices: Not Supported
00:23:42.176 EGE Aggregate Log Change Notices: Not Supported
00:23:42.176 Normal NVM Subsystem Shutdown event: Not Supported
00:23:42.176 Zone Descriptor Change Notices: Not Supported
00:23:42.176 Discovery Log Change Notices: Supported
00:23:42.176 Controller Attributes
00:23:42.176 128-bit Host Identifier: Not Supported
00:23:42.176 Non-Operational Permissive Mode: Not Supported
00:23:42.176 NVM Sets: Not Supported
00:23:42.176 Read Recovery Levels: Not Supported
00:23:42.176 Endurance Groups: Not Supported
00:23:42.176 Predictable Latency Mode: Not Supported
00:23:42.176 Traffic Based Keep ALive: Not Supported
00:23:42.177 Namespace Granularity: Not Supported
00:23:42.177 SQ Associations: Not Supported
00:23:42.177 UUID List: Not Supported
00:23:42.177 Multi-Domain Subsystem: Not Supported
00:23:42.177 Fixed Capacity Management: Not Supported
00:23:42.177 Variable Capacity Management: Not Supported
00:23:42.177 Delete Endurance Group: Not Supported
00:23:42.177 Delete NVM Set: Not Supported
00:23:42.177 Extended LBA Formats Supported: Not Supported
00:23:42.177 Flexible Data Placement Supported: Not Supported
00:23:42.177 
00:23:42.177 Controller Memory Buffer Support
00:23:42.177 ================================
00:23:42.177 Supported: No
00:23:42.177 
00:23:42.177 Persistent Memory Region Support
00:23:42.177 ================================
00:23:42.177 Supported: No
00:23:42.177 
00:23:42.177 Admin Command Set Attributes
00:23:42.177 ============================
00:23:42.177 Security Send/Receive: Not Supported
00:23:42.177 Format NVM: Not Supported
00:23:42.177 Firmware Activate/Download: Not Supported
00:23:42.177 Namespace Management: Not Supported
00:23:42.177 Device Self-Test: Not Supported
00:23:42.177 Directives: Not Supported
00:23:42.177 NVMe-MI: Not Supported
00:23:42.177 Virtualization Management: Not Supported
00:23:42.177 Doorbell Buffer Config: Not Supported
00:23:42.177 Get LBA Status Capability: Not Supported
00:23:42.177 Command & Feature Lockdown Capability: Not Supported
00:23:42.177 Abort Command Limit: 1
00:23:42.177 Async Event Request Limit: 4
00:23:42.177 Number of Firmware Slots: N/A
00:23:42.177 Firmware Slot 1 Read-Only: N/A
00:23:42.177 Firmware Activation Without Reset: N/A
00:23:42.177 Multiple Update Detection Support: N/A
00:23:42.177 Firmware Update Granularity: No Information Provided
00:23:42.177 Per-Namespace SMART Log: No
00:23:42.177 Asymmetric Namespace Access Log Page: Not Supported
00:23:42.177 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:42.177 Command Effects Log Page: Not Supported
00:23:42.177 Get Log Page Extended Data: Supported
00:23:42.177 Telemetry Log Pages: Not Supported
00:23:42.177 Persistent Event Log Pages: Not Supported
00:23:42.177 Supported Log Pages Log Page: May Support
00:23:42.177 Commands Supported & Effects Log Page: Not Supported
00:23:42.177 Feature Identifiers & Effects Log Page:May Support
00:23:42.177 NVMe-MI Commands & Effects Log Page: May Support
00:23:42.177 Data Area 4 for Telemetry Log: Not Supported
00:23:42.177 Error Log Page Entries Supported: 128
00:23:42.177 Keep Alive: Not Supported
00:23:42.177 
00:23:42.177 NVM Command Set Attributes
00:23:42.177 ==========================
00:23:42.177 Submission Queue Entry Size
00:23:42.177 Max: 1
00:23:42.177 Min: 1
00:23:42.177 Completion Queue Entry Size
00:23:42.177 Max: 1
00:23:42.177 Min: 1
00:23:42.177 Number of Namespaces: 0
00:23:42.177 Compare Command: Not Supported
00:23:42.177 Write Uncorrectable Command: Not Supported
00:23:42.177 Dataset Management Command: Not Supported
00:23:42.177 Write Zeroes Command: Not Supported
00:23:42.177 Set Features Save Field: Not Supported
00:23:42.177 Reservations: Not Supported
00:23:42.177 Timestamp: Not Supported
00:23:42.177 Copy: Not Supported
00:23:42.177 Volatile Write Cache: Not Present
00:23:42.177 Atomic Write Unit (Normal): 1
00:23:42.177 Atomic Write Unit (PFail): 1
00:23:42.177 Atomic Compare & Write Unit: 1
00:23:42.177 Fused Compare & Write: Supported
00:23:42.177 Scatter-Gather List
00:23:42.177 SGL Command Set: Supported
00:23:42.177 SGL Keyed: Supported
00:23:42.177 SGL Bit Bucket Descriptor: Not Supported
00:23:42.177 SGL Metadata Pointer: Not Supported
00:23:42.177 Oversized SGL: Not Supported
00:23:42.177 SGL Metadata Address: Not Supported
00:23:42.177 SGL Offset: Supported
00:23:42.177 Transport SGL Data Block: Not Supported
00:23:42.177 Replay Protected Memory Block: Not Supported
00:23:42.177 
00:23:42.177 Firmware Slot Information
00:23:42.177 =========================
00:23:42.177 Active slot: 0
00:23:42.177 
00:23:42.177 
00:23:42.177 Error Log
00:23:42.177 =========
00:23:42.177 
00:23:42.177 Active Namespaces
00:23:42.177 =================
00:23:42.177 Discovery Log Page
00:23:42.177 ==================
00:23:42.177 Generation Counter: 2
00:23:42.177 Number of Records: 2
00:23:42.177 Record Format: 0
00:23:42.177 
00:23:42.177 Discovery Log Entry 0
00:23:42.177 ----------------------
00:23:42.177 Transport Type: 1 (RDMA)
00:23:42.177 Address Family: 1 (IPv4)
00:23:42.177 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:42.177 Entry Flags:
00:23:42.177 Duplicate Returned Information: 1
00:23:42.177 Explicit Persistent Connection Support for Discovery: 1
00:23:42.177 Transport Requirements:
00:23:42.177 Secure Channel: Not Required
00:23:42.177 Port ID: 0 (0x0000)
00:23:42.177 Controller ID: 65535 (0xffff)
00:23:42.177 Admin Max SQ Size: 128
00:23:42.177 Transport Service Identifier: 4420
00:23:42.177 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:42.177 Transport Address: 192.168.100.8
00:23:42.177 Transport Specific Address Subtype - RDMA
00:23:42.177 RDMA QP Service Type: 1 (Reliable Connected)
00:23:42.177 RDMA Provider Type: 1 (No provider specified)
00:23:42.177 RDMA CM Service: 1 (RDMA_CM)
00:23:42.177 Discovery Log Entry 1
00:23:42.177 ----------------------
00:23:42.177 Transport Type: 1 (RDMA)
00:23:42.177 Address Family: 1 (IPv4)
00:23:42.177 Subsystem Type: 2 (NVM Subsystem)
00:23:42.177 Entry Flags:
00:23:42.177 Duplicate Returned Information: 0
00:23:42.177 Explicit Persistent Connection Support for Discovery: 0
00:23:42.177 Transport Requirements:
00:23:42.177 Secure Channel: Not Required
00:23:42.177 Port ID: 0 (0x0000)
00:23:42.177 Controller ID: 65535 (0xffff)
00:23:42.177 Admin Max SQ Size: [2024-06-07 23:14:34.282006] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:42.177 [2024-06-07 23:14:34.282019] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2760 doesn't match qid
00:23:42.177 [2024-06-07 23:14:34.282030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32553 cdw0:5 sqhd:9390 p:0 m:0 dnr:0
00:23:42.177 [2024-06-07 23:14:34.282035] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2760 doesn't match qid
00:23:42.177 [2024-06-07 23:14:34.282041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32553 cdw0:5 sqhd:9390 p:0 m:0 dnr:0
00:23:42.177 [2024-06-07 23:14:34.282045] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2760 doesn't match qid
00:23:42.177 [2024-06-07 23:14:34.282053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32553 cdw0:5 sqhd:9390 p:0 m:0 dnr:0
00:23:42.177 [2024-06-07 23:14:34.282057] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 2760 doesn't match qid
00:23:42.177 [2024-06-07 23:14:34.282062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32553 cdw0:5 sqhd:9390 p:0 m:0 dnr:0
00:23:42.177 [2024-06-07 23:14:34.282070] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183600
00:23:42.177 [2024-06-07 23:14:34.282076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:42.177 [2024-06-07 23:14:34.282095] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:42.177 [2024-06-07 23:14:34.282100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:23:42.177 [2024-06-07 23:14:34.282106] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600
00:23:42.177 [2024-06-07 23:14:34.282112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:42.177 [2024-06-07 23:14:34.282116] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600
00:23:42.177 [2024-06-07 23:14:34.282134] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.177 [2024-06-07 23:14:34.282138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:42.177 [2024-06-07 23:14:34.282143] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:42.177 [2024-06-07 23:14:34.282147] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:42.177 [2024-06-07 23:14:34.282151] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.177 [2024-06-07 23:14:34.282157] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.177 [2024-06-07 23:14:34.282163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.177 [2024-06-07 23:14:34.282179] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.177 [2024-06-07 23:14:34.282184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:42.177 [2024-06-07 23:14:34.282188] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.177 [2024-06-07 23:14:34.282195] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282221] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282229] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282236] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282264] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282275] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282282] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282305] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282314] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282321] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282342] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282350] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282357] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282381] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282390] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282420] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282428] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282435] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282461] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282470] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282476] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 
key:0x0 00:23:42.178 [2024-06-07 23:14:34.282498] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282508] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282544] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282552] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282559] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282579] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282588] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282594] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282624] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282632] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282638] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282666] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282675] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282681] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282709] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282717] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282724] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282745] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282754] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282761] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282786] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282794] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282829] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282837] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282844] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282871] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282880] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282886] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.178 [2024-06-07 23:14:34.282910] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.178 [2024-06-07 23:14:34.282914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:42.178 [2024-06-07 23:14:34.282918] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.178 [2024-06-07 23:14:34.282925] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.282930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.282949] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.282953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.282957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.282964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.282970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.282992] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.282996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283000] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283007] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283032] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283040] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283047] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283069] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283077] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283084] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283108] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283117] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283123] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283154] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283162] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283169] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283196] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283205] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283211] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283234] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283243] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283250] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283276] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283284] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283291] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283313] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283321] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283327] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283351] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283359] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283366] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283388] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283396] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283402] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283424] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283432] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283439] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283468] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283483] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.179 [2024-06-07 23:14:34.283508] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.179 [2024-06-07 23:14:34.283512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:42.179 [2024-06-07 23:14:34.283516] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.179 [2024-06-07 23:14:34.283523] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283545] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283553] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283560] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283583] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283591] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283598] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283623] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283631] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283637] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283660] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283668] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283674] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283705] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283713] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283720] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283740] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283749] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283755] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283779] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283787] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283794] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283815] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283824] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283830] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283853] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283862] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283868] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283890] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283898] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283905] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283933] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283941] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283947] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.283974] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.283978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.283982] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283989] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.283994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284019] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.284027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284034] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284057] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.284065] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284072] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284100] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.284108] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284136] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.284145] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284153] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284178] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:42.180 [2024-06-07 23:14:34.284186] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284193] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.180 [2024-06-07 23:14:34.284198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.180 [2024-06-07 23:14:34.284217] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.180 [2024-06-07 23:14:34.284221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284226] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284232] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284261] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284270] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284276] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284307] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284313] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284341] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284350] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284356] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284376] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284385] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284393] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284416] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284424] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284431] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284452] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284460] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284467] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284491] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284500] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284506] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284536] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284544] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284550] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284580] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284588] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284595] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284615] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284632] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284639] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284659] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284668] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284674] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284702] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284710] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284717] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284741] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284750] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284756] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284784] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284792] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284799] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284825] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284833] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284840] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284866] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284876] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284883] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284910] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284919] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284925] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.181 [2024-06-07 23:14:34.284931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.181 [2024-06-07 23:14:34.284952] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.181 [2024-06-07 23:14:34.284956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:42.181 [2024-06-07 23:14:34.284960] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.284966] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.284972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.182 [2024-06-07 23:14:34.284992] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.284996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.285001] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.285007] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.289019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.182 [2024-06-07 23:14:34.289042] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.289047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.289051] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.289056] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:42.182 128 00:23:42.182 Transport Service Identifier: 4420 00:23:42.182 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:42.182 Transport Address: 192.168.100.8 00:23:42.182 Transport Specific Address Subtype - RDMA 00:23:42.182 RDMA QP Service Type: 1 (Reliable Connected) 00:23:42.182 RDMA Provider Type: 1 (No provider specified) 00:23:42.182 RDMA CM Service: 1 (RDMA_CM) 00:23:42.182 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:42.182 [2024-06-07 23:14:34.352198] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
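Note on the identify step above: spdk_nvme_identify reaches the target purely through the -r transport ID string (trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), and the admin-queue DEBUG records that follow are the generic controller bring-up driven by that connect: FABRIC CONNECT, VS/CAP property gets, CC.EN = 1, wait for CSTS.RDY = 1, then the IDENTIFY commands whose decoded report appears further down. A minimal standalone sketch of the same flow against SPDK's public C API is shown here for orientation; it is illustrative only (program name, option defaults, and the trimmed error handling are assumptions), not part of the test scripts, and details may differ slightly across SPDK versions.

/* identify_sketch.c - hypothetical, not part of the SPDK repo or this test. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Same environment bring-up the identify tool performs (DPDK EAL init). */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* The -r argument from the test, parsed into a transport ID. */
	if (spdk_nvme_transport_id_parse(&trid,
		"trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: drives FABRIC CONNECT, CC.EN = 1 and the
	 * CSTS.RDY wait seen in the DEBUG records, then issues IDENTIFY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS %u, namespaces %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts, (unsigned)cdata->nn);

	/* Detach requests a normal shutdown (CC.SHN) and polls CSTS.SHST,
	 * the same property set/get sequence logged during teardown. */
	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() blocks until the controller reaches the ready state, which is why every state transition in the log completes before the identify report is printed.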
00:23:42.182 [2024-06-07 23:14:34.352240] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016986 ] 00:23:42.182 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.182 [2024-06-07 23:14:34.391532] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:42.182 [2024-06-07 23:14:34.391601] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:42.182 [2024-06-07 23:14:34.391616] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:42.182 [2024-06-07 23:14:34.391620] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:42.182 [2024-06-07 23:14:34.391640] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:42.182 [2024-06-07 23:14:34.402403] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:42.182 [2024-06-07 23:14:34.416663] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:42.182 [2024-06-07 23:14:34.416672] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:42.182 [2024-06-07 23:14:34.416677] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416682] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416686] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416690] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416695] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416699] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416703] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416707] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416711] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416715] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416719] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416723] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416728] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416732] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416736] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local 
addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416740] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416744] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416748] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416756] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416761] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416767] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416771] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416775] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416779] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416784] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416788] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416792] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416796] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416800] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416804] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416808] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:42.182 [2024-06-07 23:14:34.416812] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:42.182 [2024-06-07 23:14:34.416815] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:42.182 [2024-06-07 23:14:34.416827] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.416838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183600 00:23:42.182 [2024-06-07 23:14:34.422014] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.422022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.422027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422032] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: 
*DEBUG*: CNTLID 0x0001 00:23:42.182 [2024-06-07 23:14:34.422037] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:42.182 [2024-06-07 23:14:34.422041] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:42.182 [2024-06-07 23:14:34.422050] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.182 [2024-06-07 23:14:34.422076] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.422080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.422084] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:42.182 [2024-06-07 23:14:34.422089] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:42.182 [2024-06-07 23:14:34.422099] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.182 [2024-06-07 23:14:34.422121] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.422126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.422130] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:42.182 [2024-06-07 23:14:34.422134] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:42.182 [2024-06-07 23:14:34.422145] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.182 [2024-06-07 23:14:34.422151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.182 [2024-06-07 23:14:34.422168] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.182 [2024-06-07 23:14:34.422173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:42.182 [2024-06-07 23:14:34.422177] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:42.182 [2024-06-07 23:14:34.422181] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422187] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422208] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422216] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:42.183 [2024-06-07 23:14:34.422220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:42.183 [2024-06-07 23:14:34.422224] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:42.183 [2024-06-07 23:14:34.422333] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:42.183 [2024-06-07 23:14:34.422336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:42.183 [2024-06-07 23:14:34.422343] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422371] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:42.183 [2024-06-07 23:14:34.422383] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422390] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422415] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422423] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:42.183 [2024-06-07 23:14:34.422427] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:42.183 [2024-06-07 
23:14:34.422431] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422436] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:42.183 [2024-06-07 23:14:34.422447] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422454] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183600 00:23:42.183 [2024-06-07 23:14:34.422499] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422509] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:42.183 [2024-06-07 23:14:34.422513] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:42.183 [2024-06-07 23:14:34.422517] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:42.183 [2024-06-07 23:14:34.422521] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:42.183 [2024-06-07 23:14:34.422524] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:42.183 [2024-06-07 23:14:34.422528] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422532] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422539] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422576] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422587] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.183 [2024-06-07 23:14:34.422597] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422602] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.183 [2024-06-07 23:14:34.422609] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.183 [2024-06-07 23:14:34.422618] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.183 [2024-06-07 23:14:34.422627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422631] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422643] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422668] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422676] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:42.183 [2024-06-07 23:14:34.422681] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422685] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422691] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422701] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.183 [2024-06-07 23:14:34.422725] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422774] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422787] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183600 00:23:42.183 [2024-06-07 23:14:34.422818] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422834] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:42.183 [2024-06-07 23:14:34.422842] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422846] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422858] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.183 [2024-06-07 23:14:34.422864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183600 00:23:42.183 [2024-06-07 23:14:34.422889] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.183 [2024-06-07 23:14:34.422894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:42.183 [2024-06-07 23:14:34.422902] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:42.183 [2024-06-07 23:14:34.422906] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.422912] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422919] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.422924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183600 00:23:42.184 [2024-06-07 23:14:34.422952] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.422956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.422964] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422968] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.422974] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422985] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.422993] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:42.184 [2024-06-07 23:14:34.422997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:42.184 [2024-06-07 23:14:34.423002] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:42.184 [2024-06-07 23:14:34.423018] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.184 [2024-06-07 23:14:34.423032] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.184 [2024-06-07 23:14:34.423046] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423054] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423061] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.184 [2024-06-07 23:14:34.423072] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423081] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423089] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:23:42.184 [2024-06-07 23:14:34.423093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423097] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423103] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.184 [2024-06-07 23:14:34.423127] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423135] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423141] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.184 [2024-06-07 23:14:34.423169] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423177] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423185] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183600 00:23:42.184 [2024-06-07 23:14:34.423197] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183600 00:23:42.184 [2024-06-07 23:14:34.423209] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183600 00:23:42.184 [2024-06-07 23:14:34.423224] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183600 00:23:42.184 [2024-06-07 23:14:34.423235] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: 
*DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423248] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423259] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423269] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423273] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423283] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183600 00:23:42.184 [2024-06-07 23:14:34.423290] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.184 [2024-06-07 23:14:34.423294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:42.184 [2024-06-07 23:14:34.423300] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183600 00:23:42.184 ===================================================== 00:23:42.184 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:42.184 ===================================================== 00:23:42.184 Controller Capabilities/Features 00:23:42.184 ================================ 00:23:42.184 Vendor ID: 8086 00:23:42.184 Subsystem Vendor ID: 8086 00:23:42.184 Serial Number: SPDK00000000000001 00:23:42.184 Model Number: SPDK bdev Controller 00:23:42.184 Firmware Version: 24.09 00:23:42.184 Recommended Arb Burst: 6 00:23:42.184 IEEE OUI Identifier: e4 d2 5c 00:23:42.184 Multi-path I/O 00:23:42.184 May have multiple subsystem ports: Yes 00:23:42.184 May have multiple controllers: Yes 00:23:42.184 Associated with SR-IOV VF: No 00:23:42.184 Max Data Transfer Size: 131072 00:23:42.184 Max Number of Namespaces: 32 00:23:42.184 Max Number of I/O Queues: 127 00:23:42.184 NVMe Specification Version (VS): 1.3 00:23:42.184 NVMe Specification Version (Identify): 1.3 00:23:42.184 Maximum Queue Entries: 128 00:23:42.184 Contiguous Queues Required: Yes 00:23:42.184 Arbitration Mechanisms Supported 00:23:42.184 Weighted Round Robin: Not Supported 00:23:42.184 Vendor Specific: Not Supported 00:23:42.184 Reset Timeout: 15000 ms 00:23:42.184 Doorbell Stride: 4 bytes 00:23:42.184 NVM Subsystem Reset: Not Supported 00:23:42.184 Command Sets Supported 00:23:42.184 NVM Command Set: Supported 00:23:42.184 Boot Partition: Not Supported 00:23:42.184 Memory Page Size Minimum: 4096 bytes 00:23:42.184 Memory Page Size Maximum: 4096 bytes 00:23:42.184 Persistent Memory Region: Not Supported 00:23:42.184 Optional Asynchronous Events Supported 00:23:42.184 Namespace Attribute Notices: Supported 00:23:42.184 Firmware Activation Notices: Not Supported 00:23:42.184 ANA Change Notices: Not Supported 00:23:42.184 PLE Aggregate Log Change Notices: Not Supported 00:23:42.184 LBA Status Info Alert Notices: Not Supported 
00:23:42.185 EGE Aggregate Log Change Notices: Not Supported 00:23:42.185 Normal NVM Subsystem Shutdown event: Not Supported 00:23:42.185 Zone Descriptor Change Notices: Not Supported 00:23:42.185 Discovery Log Change Notices: Not Supported 00:23:42.185 Controller Attributes 00:23:42.185 128-bit Host Identifier: Supported 00:23:42.185 Non-Operational Permissive Mode: Not Supported 00:23:42.185 NVM Sets: Not Supported 00:23:42.185 Read Recovery Levels: Not Supported 00:23:42.185 Endurance Groups: Not Supported 00:23:42.185 Predictable Latency Mode: Not Supported 00:23:42.185 Traffic Based Keep ALive: Not Supported 00:23:42.185 Namespace Granularity: Not Supported 00:23:42.185 SQ Associations: Not Supported 00:23:42.185 UUID List: Not Supported 00:23:42.185 Multi-Domain Subsystem: Not Supported 00:23:42.185 Fixed Capacity Management: Not Supported 00:23:42.185 Variable Capacity Management: Not Supported 00:23:42.185 Delete Endurance Group: Not Supported 00:23:42.185 Delete NVM Set: Not Supported 00:23:42.185 Extended LBA Formats Supported: Not Supported 00:23:42.185 Flexible Data Placement Supported: Not Supported 00:23:42.185 00:23:42.185 Controller Memory Buffer Support 00:23:42.185 ================================ 00:23:42.185 Supported: No 00:23:42.185 00:23:42.185 Persistent Memory Region Support 00:23:42.185 ================================ 00:23:42.185 Supported: No 00:23:42.185 00:23:42.185 Admin Command Set Attributes 00:23:42.185 ============================ 00:23:42.185 Security Send/Receive: Not Supported 00:23:42.185 Format NVM: Not Supported 00:23:42.185 Firmware Activate/Download: Not Supported 00:23:42.185 Namespace Management: Not Supported 00:23:42.185 Device Self-Test: Not Supported 00:23:42.185 Directives: Not Supported 00:23:42.185 NVMe-MI: Not Supported 00:23:42.185 Virtualization Management: Not Supported 00:23:42.185 Doorbell Buffer Config: Not Supported 00:23:42.185 Get LBA Status Capability: Not Supported 00:23:42.185 Command & Feature Lockdown Capability: Not Supported 00:23:42.185 Abort Command Limit: 4 00:23:42.185 Async Event Request Limit: 4 00:23:42.185 Number of Firmware Slots: N/A 00:23:42.185 Firmware Slot 1 Read-Only: N/A 00:23:42.185 Firmware Activation Without Reset: N/A 00:23:42.185 Multiple Update Detection Support: N/A 00:23:42.185 Firmware Update Granularity: No Information Provided 00:23:42.185 Per-Namespace SMART Log: No 00:23:42.185 Asymmetric Namespace Access Log Page: Not Supported 00:23:42.185 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:42.185 Command Effects Log Page: Supported 00:23:42.185 Get Log Page Extended Data: Supported 00:23:42.185 Telemetry Log Pages: Not Supported 00:23:42.185 Persistent Event Log Pages: Not Supported 00:23:42.185 Supported Log Pages Log Page: May Support 00:23:42.185 Commands Supported & Effects Log Page: Not Supported 00:23:42.185 Feature Identifiers & Effects Log Page:May Support 00:23:42.185 NVMe-MI Commands & Effects Log Page: May Support 00:23:42.185 Data Area 4 for Telemetry Log: Not Supported 00:23:42.185 Error Log Page Entries Supported: 128 00:23:42.185 Keep Alive: Supported 00:23:42.185 Keep Alive Granularity: 10000 ms 00:23:42.185 00:23:42.185 NVM Command Set Attributes 00:23:42.185 ========================== 00:23:42.185 Submission Queue Entry Size 00:23:42.185 Max: 64 00:23:42.185 Min: 64 00:23:42.185 Completion Queue Entry Size 00:23:42.185 Max: 16 00:23:42.185 Min: 16 00:23:42.185 Number of Namespaces: 32 00:23:42.185 Compare Command: Supported 00:23:42.185 Write Uncorrectable Command: Not 
Supported 00:23:42.185 Dataset Management Command: Supported 00:23:42.185 Write Zeroes Command: Supported 00:23:42.185 Set Features Save Field: Not Supported 00:23:42.185 Reservations: Supported 00:23:42.185 Timestamp: Not Supported 00:23:42.185 Copy: Supported 00:23:42.185 Volatile Write Cache: Present 00:23:42.185 Atomic Write Unit (Normal): 1 00:23:42.185 Atomic Write Unit (PFail): 1 00:23:42.185 Atomic Compare & Write Unit: 1 00:23:42.185 Fused Compare & Write: Supported 00:23:42.185 Scatter-Gather List 00:23:42.185 SGL Command Set: Supported 00:23:42.185 SGL Keyed: Supported 00:23:42.185 SGL Bit Bucket Descriptor: Not Supported 00:23:42.185 SGL Metadata Pointer: Not Supported 00:23:42.185 Oversized SGL: Not Supported 00:23:42.185 SGL Metadata Address: Not Supported 00:23:42.185 SGL Offset: Supported 00:23:42.185 Transport SGL Data Block: Not Supported 00:23:42.185 Replay Protected Memory Block: Not Supported 00:23:42.185 00:23:42.185 Firmware Slot Information 00:23:42.185 ========================= 00:23:42.185 Active slot: 1 00:23:42.185 Slot 1 Firmware Revision: 24.09 00:23:42.185 00:23:42.185 00:23:42.185 Commands Supported and Effects 00:23:42.185 ============================== 00:23:42.185 Admin Commands 00:23:42.185 -------------- 00:23:42.185 Get Log Page (02h): Supported 00:23:42.185 Identify (06h): Supported 00:23:42.185 Abort (08h): Supported 00:23:42.185 Set Features (09h): Supported 00:23:42.185 Get Features (0Ah): Supported 00:23:42.185 Asynchronous Event Request (0Ch): Supported 00:23:42.185 Keep Alive (18h): Supported 00:23:42.185 I/O Commands 00:23:42.185 ------------ 00:23:42.185 Flush (00h): Supported LBA-Change 00:23:42.185 Write (01h): Supported LBA-Change 00:23:42.185 Read (02h): Supported 00:23:42.185 Compare (05h): Supported 00:23:42.185 Write Zeroes (08h): Supported LBA-Change 00:23:42.185 Dataset Management (09h): Supported LBA-Change 00:23:42.185 Copy (19h): Supported LBA-Change 00:23:42.185 Unknown (79h): Supported LBA-Change 00:23:42.185 Unknown (7Ah): Supported 00:23:42.185 00:23:42.185 Error Log 00:23:42.185 ========= 00:23:42.185 00:23:42.185 Arbitration 00:23:42.185 =========== 00:23:42.185 Arbitration Burst: 1 00:23:42.185 00:23:42.185 Power Management 00:23:42.185 ================ 00:23:42.185 Number of Power States: 1 00:23:42.185 Current Power State: Power State #0 00:23:42.185 Power State #0: 00:23:42.185 Max Power: 0.00 W 00:23:42.185 Non-Operational State: Operational 00:23:42.185 Entry Latency: Not Reported 00:23:42.185 Exit Latency: Not Reported 00:23:42.185 Relative Read Throughput: 0 00:23:42.185 Relative Read Latency: 0 00:23:42.185 Relative Write Throughput: 0 00:23:42.185 Relative Write Latency: 0 00:23:42.185 Idle Power: Not Reported 00:23:42.185 Active Power: Not Reported 00:23:42.185 Non-Operational Permissive Mode: Not Supported 00:23:42.185 00:23:42.185 Health Information 00:23:42.185 ================== 00:23:42.185 Critical Warnings: 00:23:42.185 Available Spare Space: OK 00:23:42.185 Temperature: OK 00:23:42.185 Device Reliability: OK 00:23:42.185 Read Only: No 00:23:42.185 Volatile Memory Backup: OK 00:23:42.185 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:42.185 Temperature Threshold: [2024-06-07 23:14:34.423373] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183600 00:23:42.185 [2024-06-07 23:14:34.423380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
00:23:42.185 [2024-06-07 23:14:34.423402] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.185 [2024-06-07 23:14:34.423406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:42.185 [2024-06-07 23:14:34.423410] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183600 00:23:42.185 [2024-06-07 23:14:34.423430] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:42.185 [2024-06-07 23:14:34.423437] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62825 doesn't match qid 00:23:42.185 [2024-06-07 23:14:34.423448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32700 cdw0:5 sqhd:4390 p:0 m:0 dnr:0 00:23:42.185 [2024-06-07 23:14:34.423452] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62825 doesn't match qid 00:23:42.185 [2024-06-07 23:14:34.423458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32700 cdw0:5 sqhd:4390 p:0 m:0 dnr:0 00:23:42.185 [2024-06-07 23:14:34.423462] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62825 doesn't match qid 00:23:42.185 [2024-06-07 23:14:34.423468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32700 cdw0:5 sqhd:4390 p:0 m:0 dnr:0 00:23:42.185 [2024-06-07 23:14:34.423472] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62825 doesn't match qid 00:23:42.185 [2024-06-07 23:14:34.423477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32700 cdw0:5 sqhd:4390 p:0 m:0 dnr:0 00:23:42.185 [2024-06-07 23:14:34.423486] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183600 00:23:42.185 [2024-06-07 23:14:34.423493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.185 [2024-06-07 23:14:34.423511] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423522] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423531] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423548] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423556] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:42.186 [2024-06-07 23:14:34.423560] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:42.186 [2024-06-07 23:14:34.423564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cfa28 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423570] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423595] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423604] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423610] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423634] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423642] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423649] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423676] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423684] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423691] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423715] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:42.186 [2024-06-07 23:14:34.423720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:42.186 [2024-06-07 23:14:34.423724] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423731] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600 00:23:42.186 [2024-06-07 23:14:34.423737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:42.186 [2024-06-07 23:14:34.423761] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
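From the "Prepare to destruct SSD" record onward, the teardown is the standard NVMe shutdown handshake carried over Fabrics property commands: one FABRIC PROPERTY SET writes CC.SHN = 01b (normal shutdown), and the repeated FABRIC PROPERTY GET records above and below poll CSTS until SHST reads 10b. For the discovery controller near the top of this capture the polled completion value flips from cdw0:1 (RDY only) to cdw0:9 (RDY plus SHST complete) just before "shutdown complete" is reported, within the 10000 ms budget noted above. A self-contained sketch of that spec-level handshake follows; prop_get32/prop_set32 and the toy register model are hypothetical stand-ins for the fabrics property commands (this is not SPDK's internal code), included only so the snippet compiles and runs.

/* shutdown_handshake.c - illustrative only; models the CC.SHN / CSTS.SHST
 * handshake from the NVMe base specification, not SPDK internals. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NVME_REG_CC    0x14u   /* Controller Configuration */
#define NVME_REG_CSTS  0x1Cu   /* Controller Status */

/* Toy in-memory "controller" standing in for Fabrics Property Get/Set. */
static uint32_t reg_cc;
static uint32_t reg_csts = 0x1u;             /* RDY = 1, SHST = 00b */

static uint32_t prop_get32(uint32_t off)
{
	return (off == NVME_REG_CC) ? reg_cc : reg_csts;
}

static void prop_set32(uint32_t off, uint32_t val)
{
	if (off == NVME_REG_CC) {
		reg_cc = val;
		if (((val >> 14) & 0x3u) == 0x1u) {
			reg_csts = 0x9u;     /* model: RDY = 1, SHST = 10b (complete) */
		}
	}
}

/* Request a normal shutdown and poll for completion, as the host does here. */
static bool nvme_shutdown(uint32_t timeout_ms)
{
	uint32_t cc = prop_get32(NVME_REG_CC);

	cc = (cc & ~(0x3u << 14)) | (0x1u << 14);    /* CC.SHN = 01b */
	prop_set32(NVME_REG_CC, cc);                 /* the FABRIC PROPERTY SET */

	for (uint32_t waited_ms = 0; waited_ms < timeout_ms; waited_ms++) {
		uint32_t shst = (prop_get32(NVME_REG_CSTS) >> 2) & 0x3u;
		if (shst == 0x2u) {                  /* SHST = 10b: complete */
			return true;
		}
		usleep(1000);                        /* the repeated PROPERTY GETs */
	}
	return false;
}

int main(void)
{
	printf("shutdown %s\n", nvme_shutdown(10000) ? "complete" : "timed out");
	return 0;
}

The remaining captured records below continue that CSTS poll loop for nqn.2016-06.io.spdk:cnode1.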
00:23:42.186 [2024-06-07 23:14:34.423765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:23:42.186 [2024-06-07 23:14:34.423770] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183600
00:23:42.186 [2024-06-07 23:14:34.423776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600
00:23:42.186 [2024-06-07 23:14:34.423782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:42.186 [2024-06-07 23:14:34.423801] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
[... repeated *DEBUG*/*NOTICE* cycles elided: CQ recv completion -> SUCCESS (00/00) qid:0 cid:3 (sqhd 0x0000-0x001f) -> nvme_rdma_request_ready (local addr 0x2000003cf640-0x2000003cfaf0, length 0x10, lkey 0x183600) -> nvme_rdma_qpair_submit_request (local addr 0x2000003d0740, length 0x40, lkey 0x183600) -> FABRIC PROPERTY GET, issued while controller shutdown is polled ...]
00:23:42.189 [2024-06-07 23:14:34.425998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:23:42.189 [2024-06-07 23:14:34.426002] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183600
00:23:42.189 [2024-06-07 23:14:34.430014] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183600
00:23:42.189 [2024-06-07 23:14:34.430023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:42.189 [2024-06-07 23:14:34.430039] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:42.189 [2024-06-07 23:14:34.430044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0018 p:0 m:0 dnr:0
00:23:42.189 [2024-06-07 23:14:34.430048] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183600
00:23:42.189 [2024-06-07 23:14:34.430053] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:23:42.448 0 Kelvin (-273 Celsius)
00:23:42.448 Available Spare: 0%
00:23:42.448 Available Spare Threshold: 0%
00:23:42.448 Life Percentage Used: 0%
00:23:42.448 Data Units Read: 0
00:23:42.448 Data Units Written: 0
00:23:42.448 Host Read Commands: 0
00:23:42.448 Host Write Commands: 0
00:23:42.448 Controller Busy Time: 0 minutes
00:23:42.448 Power Cycles: 0
00:23:42.448 Power On Hours: 0 hours
00:23:42.448 Unsafe Shutdowns: 0
00:23:42.448 Unrecoverable Media Errors: 0
00:23:42.448 Lifetime Error Log Entries: 0
00:23:42.448 Warning Temperature Time: 0 minutes
00:23:42.448 Critical Temperature Time: 0 minutes
00:23:42.448 
00:23:42.448 Number of Queues
00:23:42.448 ================
00:23:42.448 Number of I/O Submission Queues: 127
00:23:42.448 Number of I/O Completion Queues: 127
00:23:42.448 
00:23:42.448 Active Namespaces
00:23:42.448 =================
00:23:42.448 Namespace ID:1
00:23:42.448 Error Recovery Timeout: Unlimited
00:23:42.448 Command Set Identifier: NVM (00h)
00:23:42.448 Deallocate: Supported
00:23:42.448 Deallocated/Unwritten Error: Not Supported
00:23:42.448 Deallocated Read Value: Unknown
00:23:42.448 Deallocate in Write Zeroes: Not Supported
00:23:42.448 Deallocated Guard Field: 0xFFFF
00:23:42.448 Flush: Supported
00:23:42.448 Reservation: Supported
00:23:42.448 Namespace Sharing Capabilities: Multiple Controllers
00:23:42.448 Size (in LBAs): 131072 (0GiB)
00:23:42.448 Capacity (in LBAs): 131072 (0GiB)
00:23:42.448 Utilization (in LBAs): 131072 (0GiB)
00:23:42.448 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:42.448 EUI64: ABCDEF0123456789
00:23:42.448 UUID: be1665d3-385b-4095-ae1c-0b5fb9acad90
00:23:42.448 Thin Provisioning: Not Supported
00:23:42.448 Per-NS Atomic Units: Yes
00:23:42.448 Atomic Boundary Size (Normal): 0
00:23:42.448 Atomic Boundary Size (PFail): 0
00:23:42.448 Atomic Boundary Offset: 0
00:23:42.448 Maximum Single Source Range Length: 65535
00:23:42.448 Maximum Copy Length: 65535
00:23:42.448 Maximum Source Range Count: 1
00:23:42.448 NGUID/EUI64 Never Reused: No
00:23:42.448 Namespace Write Protected: No
00:23:42.448 Number of LBA Formats: 1
00:23:42.448 Current LBA Format: LBA Format #00
00:23:42.448 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:42.448 
00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync
00:23:42.448 23:14:34
nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:42.448 rmmod nvme_rdma 00:23:42.448 rmmod nvme_fabrics 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1016733 ']' 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1016733 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1016733 ']' 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1016733 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1016733 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:42.448 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:42.449 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1016733' 00:23:42.449 killing process with pid 1016733 00:23:42.449 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1016733 00:23:42.449 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1016733 00:23:42.707 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:42.707 23:14:34 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:42.707 00:23:42.707 real 0m7.757s 00:23:42.707 user 0m7.960s 00:23:42.707 sys 0m4.903s 00:23:42.707 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:42.707 23:14:34 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.707 ************************************ 00:23:42.707 END TEST nvmf_identify 00:23:42.707 ************************************ 00:23:42.707 23:14:34 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=rdma 00:23:42.707 23:14:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:42.707 23:14:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:42.707 23:14:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:42.707 ************************************ 00:23:42.707 START TEST nvmf_perf 00:23:42.707 ************************************ 00:23:42.707 23:14:34 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:42.707 * Looking for test storage... 00:23:42.707 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:42.707 23:14:34 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.964 23:14:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:42.964 23:14:35 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.964 23:14:35 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.964 23:14:35 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.964 23:14:35 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.964 23:14:35 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- host/perf.sh@17 
-- # nvmftestinit 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.965 23:14:35 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
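[editor's note] The vendor/device IDs being accumulated here (the table continues just below) are what gather_supported_nvmf_pci_devs later matches against the host's PCI bus. As a rough standalone cross-check, something along the lines of the hypothetical helper below could list the same adapters; it is not part of the SPDK scripts and only assumes that lspci from pciutils is installed.

  #!/bin/bash
  # Hypothetical cross-check: list installed adapters matching the vendor:device
  # IDs collected above (Intel e810/x722 and Mellanox mlx5 families).
  ids="8086:1592 8086:159b 8086:37d2 15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:1017 15b3:1019 15b3:1015 15b3:1013"
  for id in $ids; do
      # lspci -d <vendor>:<device> filters by ID; -D includes the PCI domain in the address
      lspci -D -d "$id" | while read -r bdf rest; do
          echo "Found $bdf ($id)"
      done
  done

On this machine such a check would report the two 0x15b3:0x1015 ports at 0000:da:00.0 and 0000:da:00.1, matching the "Found ..." lines in the trace below.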
00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:49.548 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:49.548 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:49.548 Found net devices under 0000:da:00.0: mlx_0_0 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:49.548 Found net devices under 0000:da:00.1: mlx_0_1 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:49.548 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:49.549 
23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:49.549 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:49.549 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:49.549 altname enp218s0f0np0 00:23:49.549 altname ens818f0np0 00:23:49.549 inet 192.168.100.8/24 scope global mlx_0_0 00:23:49.549 valid_lft forever preferred_lft forever 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:49.549 23:14:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:49.549 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:49.549 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:49.549 altname enp218s0f1np1 00:23:49.549 altname ens818f1np1 00:23:49.549 inet 192.168.100.9/24 scope global mlx_0_1 00:23:49.549 valid_lft forever preferred_lft forever 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:49.549 192.168.100.9' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:49.549 192.168.100.9' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:49.549 192.168.100.9' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:49.549 23:14:41 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1020526 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1020526 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1020526 ']' 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:49.549 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.549 [2024-06-07 23:14:41.159325] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:23:49.549 [2024-06-07 23:14:41.159370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.549 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.549 [2024-06-07 23:14:41.218993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.549 [2024-06-07 23:14:41.297874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.549 [2024-06-07 23:14:41.297916] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.549 [2024-06-07 23:14:41.297923] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.549 [2024-06-07 23:14:41.297928] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.549 [2024-06-07 23:14:41.297933] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
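For reference, the RDMA address discovery traced above (get_rdma_if_list / get_ip_address / RDMA_IP_LIST) reduces to a few lines of shell. This is a minimal sketch rather than the repository's nvmf/common.sh itself; it assumes the two mlx_0_* netdevs and the 192.168.100.0/24 addressing observed in this run:

    # Extract the IPv4 address assigned to an RDMA-capable netdev,
    # exactly as the "ip -o -4 addr show | awk | cut" pipeline in the trace does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Collect the per-port IPs, then derive the first and second target
    # addresses the same way the trace does (head -n 1 / tail -n +2).
    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)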
00:23:49.549 [2024-06-07 23:14:41.297976] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.549 [2024-06-07 23:14:41.298079] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.549 [2024-06-07 23:14:41.298102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.549 [2024-06-07 23:14:41.298103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:49.808 23:14:41 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:53.093 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:53.093 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:53.093 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:23:53.093 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:53.352 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:53.352 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:23:53.352 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:53.352 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:23:53.352 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:23:53.352 [2024-06-07 23:14:45.557270] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:23:53.352 [2024-06-07 23:14:45.577051] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf9be80/0xfa99c0) succeed. 00:23:53.352 [2024-06-07 23:14:45.586321] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf9d4c0/0x1029a00) succeed. 
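The rpc.py sequence that stands the RDMA target up for host/perf.sh can be condensed from the trace around this point. A sketch using the values observed in this run (the short $rpc variable is our shorthand for the full scripts/rpc.py path; Nvme0n1 is the bdev created from the locally attached 0000:5f:00.0 controller):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Transport, subsystem, namespaces, and listeners as exercised by the test.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420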
00:23:53.610 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.610 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:53.610 23:14:45 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.869 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:53.869 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:54.127 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:54.127 [2024-06-07 23:14:46.373747] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:54.127 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:54.386 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:23:54.386 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:23:54.386 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:54.386 23:14:46 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:23:55.762 Initializing NVMe Controllers 00:23:55.762 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:23:55.762 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:23:55.762 Initialization complete. Launching workers. 00:23:55.762 ======================================================== 00:23:55.762 Latency(us) 00:23:55.762 Device Information : IOPS MiB/s Average min max 00:23:55.762 PCIE (0000:5f:00.0) NSID 1 from core 0: 99729.38 389.57 320.48 29.64 4244.44 00:23:55.762 ======================================================== 00:23:55.762 Total : 99729.38 389.57 320.48 29.64 4244.44 00:23:55.762 00:23:55.762 23:14:47 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:55.762 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.113 Initializing NVMe Controllers 00:23:59.113 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.113 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.113 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:59.113 Initialization complete. Launching workers. 
00:23:59.113 ======================================================== 00:23:59.113 Latency(us) 00:23:59.113 Device Information : IOPS MiB/s Average min max 00:23:59.113 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6711.00 26.21 148.79 47.52 4091.95 00:23:59.113 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5234.00 20.45 190.85 74.27 4107.31 00:23:59.114 ======================================================== 00:23:59.114 Total : 11945.00 46.66 167.22 47.52 4107.31 00:23:59.114 00:23:59.114 23:14:51 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:59.114 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.397 Initializing NVMe Controllers 00:24:02.397 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.397 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:02.397 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:02.397 Initialization complete. Launching workers. 00:24:02.397 ======================================================== 00:24:02.397 Latency(us) 00:24:02.397 Device Information : IOPS MiB/s Average min max 00:24:02.397 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18264.06 71.34 1752.50 482.64 8408.10 00:24:02.398 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4028.70 15.74 8002.83 4753.36 11057.67 00:24:02.398 ======================================================== 00:24:02.398 Total : 22292.76 87.08 2882.05 482.64 11057.67 00:24:02.398 00:24:02.398 23:14:54 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:24:02.398 23:14:54 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:02.398 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.672 Initializing NVMe Controllers 00:24:07.672 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.672 Controller IO queue size 128, less than required. 00:24:07.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.672 Controller IO queue size 128, less than required. 00:24:07.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.672 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.672 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.672 Initialization complete. Launching workers. 
00:24:07.672 ======================================================== 00:24:07.672 Latency(us) 00:24:07.672 Device Information : IOPS MiB/s Average min max 00:24:07.672 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3536.50 884.12 36356.04 15308.70 78299.48 00:24:07.672 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3648.00 912.00 34751.14 15292.43 56864.57 00:24:07.672 ======================================================== 00:24:07.672 Total : 7184.50 1796.12 35541.14 15292.43 78299.48 00:24:07.672 00:24:07.672 23:14:58 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:24:07.672 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.672 No valid NVMe controllers or AIO or URING devices found 00:24:07.672 Initializing NVMe Controllers 00:24:07.672 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.673 Controller IO queue size 128, less than required. 00:24:07.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.673 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:07.673 Controller IO queue size 128, less than required. 00:24:07.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.673 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:07.673 WARNING: Some requested NVMe devices were skipped 00:24:07.673 23:14:59 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:24:07.673 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.860 Initializing NVMe Controllers 00:24:11.860 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.860 Controller IO queue size 128, less than required. 00:24:11.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.860 Controller IO queue size 128, less than required. 00:24:11.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.860 Initialization complete. Launching workers. 
00:24:11.860 00:24:11.860 ==================== 00:24:11.860 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:11.860 RDMA transport: 00:24:11.860 dev name: mlx5_0 00:24:11.860 polls: 393562 00:24:11.860 idle_polls: 390002 00:24:11.860 completions: 43550 00:24:11.860 queued_requests: 1 00:24:11.860 total_send_wrs: 21775 00:24:11.860 send_doorbell_updates: 3280 00:24:11.860 total_recv_wrs: 21902 00:24:11.860 recv_doorbell_updates: 3281 00:24:11.860 --------------------------------- 00:24:11.860 00:24:11.860 ==================== 00:24:11.860 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:11.860 RDMA transport: 00:24:11.860 dev name: mlx5_0 00:24:11.860 polls: 400439 00:24:11.860 idle_polls: 400172 00:24:11.860 completions: 19906 00:24:11.860 queued_requests: 1 00:24:11.860 total_send_wrs: 9953 00:24:11.860 send_doorbell_updates: 253 00:24:11.860 total_recv_wrs: 10080 00:24:11.860 recv_doorbell_updates: 254 00:24:11.860 --------------------------------- 00:24:11.860 ======================================================== 00:24:11.860 Latency(us) 00:24:11.860 Device Information : IOPS MiB/s Average min max 00:24:11.860 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5442.60 1360.65 23519.43 10810.69 63249.10 00:24:11.860 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2487.59 621.90 51369.32 28236.64 79994.62 00:24:11.860 ======================================================== 00:24:11.860 Total : 7930.20 1982.55 32255.55 10810.69 79994.62 00:24:11.860 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:11.860 rmmod nvme_rdma 00:24:11.860 rmmod nvme_fabrics 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1020526 ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1020526 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1020526 ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1020526 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1020526 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1020526' 00:24:11.860 killing process with pid 1020526 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1020526 00:24:11.860 23:15:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1020526 00:24:14.393 23:15:06 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.393 23:15:06 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:14.393 00:24:14.393 real 0m31.149s 00:24:14.393 user 1m41.186s 00:24:14.393 sys 0m5.588s 00:24:14.393 23:15:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:14.393 23:15:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.393 ************************************ 00:24:14.393 END TEST nvmf_perf 00:24:14.393 ************************************ 00:24:14.393 23:15:06 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:24:14.393 23:15:06 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:14.393 23:15:06 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:14.393 23:15:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:14.393 ************************************ 00:24:14.393 START TEST nvmf_fio_host 00:24:14.393 ************************************ 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:24:14.393 * Looking for test storage... 
00:24:14.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.393 23:15:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.394 23:15:06 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 
00:24:19.668 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:19.668 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:19.668 Found net devices under 0000:da:00.0: mlx_0_0 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:19.668 Found net devices under 0000:da:00.1: mlx_0_1 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:19.668 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:19.928 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:19.928 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:19.928 altname enp218s0f0np0 00:24:19.928 altname ens818f0np0 00:24:19.928 inet 192.168.100.8/24 scope global mlx_0_0 00:24:19.928 valid_lft forever preferred_lft forever 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:19.928 23:15:11 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:19.928 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:19.928 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:19.928 altname enp218s0f1np1 00:24:19.928 altname ens818f1np1 00:24:19.928 inet 192.168.100.9/24 scope global mlx_0_1 00:24:19.928 valid_lft forever preferred_lft forever 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@105 -- # continue 2 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:19.928 192.168.100.9' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:19.928 192.168.100.9' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:19.928 192.168.100.9' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1028479 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1028479 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1028479 ']' 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:19.928 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.928 [2024-06-07 23:15:12.142641] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:24:19.928 [2024-06-07 23:15:12.142695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.929 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.929 [2024-06-07 23:15:12.204775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.188 [2024-06-07 23:15:12.284901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.188 [2024-06-07 23:15:12.284943] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.188 [2024-06-07 23:15:12.284950] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.188 [2024-06-07 23:15:12.284956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.188 [2024-06-07 23:15:12.284961] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.188 [2024-06-07 23:15:12.285019] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.188 [2024-06-07 23:15:12.285090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.188 [2024-06-07 23:15:12.285214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.188 [2024-06-07 23:15:12.285215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.754 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:20.754 23:15:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:20.754 23:15:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:21.013 [2024-06-07 23:15:13.111776] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10759d0/0x1079ec0) succeed. 
00:24:21.013 [2024-06-07 23:15:13.120855] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1077010/0x10bb550) succeed. 00:24:21.013 23:15:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:21.013 23:15:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:21.013 23:15:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.271 23:15:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:21.271 Malloc1 00:24:21.271 23:15:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.530 23:15:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:21.788 23:15:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:21.788 [2024-06-07 23:15:14.015076] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:21.788 23:15:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:22.047 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:22.048 
23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:22.048 23:15:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:22.306 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:22.306 fio-3.35 00:24:22.306 Starting 1 thread 00:24:22.306 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.860 00:24:24.861 test: (groupid=0, jobs=1): err= 0: pid=1028972: Fri Jun 7 23:15:16 2024 00:24:24.861 read: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(138MiB/2004msec) 00:24:24.861 slat (nsec): min=1402, max=26700, avg=1600.24, stdev=467.10 00:24:24.861 clat (usec): min=1942, max=6619, avg=3608.68, stdev=101.25 00:24:24.861 lat (usec): min=1965, max=6620, avg=3610.28, stdev=101.20 00:24:24.861 clat percentiles (usec): 00:24:24.861 | 1.00th=[ 3261], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:24:24.861 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:24:24.861 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3654], 00:24:24.861 | 99.00th=[ 3916], 99.50th=[ 3949], 99.90th=[ 4883], 99.95th=[ 5735], 00:24:24.861 | 99.99th=[ 6587] 00:24:24.861 bw ( KiB/s): min=68838, max=71048, per=99.93%, avg=70377.50, stdev=1039.39, samples=4 00:24:24.861 iops : min=17209, max=17762, avg=17594.25, stdev=260.10, samples=4 00:24:24.861 write: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(138MiB/2004msec); 0 zone resets 00:24:24.861 slat (nsec): min=1457, max=19807, avg=1708.50, stdev=524.91 00:24:24.861 clat (usec): min=2748, max=6631, avg=3606.96, stdev=94.40 00:24:24.861 lat (usec): min=2759, max=6633, avg=3608.66, stdev=94.35 00:24:24.861 clat percentiles (usec): 00:24:24.861 | 1.00th=[ 3261], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:24:24.861 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:24:24.861 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3654], 00:24:24.861 | 99.00th=[ 3916], 99.50th=[ 3949], 99.90th=[ 4424], 99.95th=[ 5735], 00:24:24.861 | 99.99th=[ 6587] 00:24:24.861 bw ( KiB/s): min=68814, max=71040, per=100.00%, avg=70439.50, stdev=1084.78, samples=4 00:24:24.861 iops : min=17203, max=17760, avg=17609.75, stdev=271.44, samples=4 00:24:24.861 lat (msec) : 2=0.01%, 4=99.80%, 10=0.19% 00:24:24.861 cpu : usr=99.60%, sys=0.00%, ctx=16, majf=0, minf=2 00:24:24.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:24.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:24.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.861 issued rwts: total=35282,35288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.861 00:24:24.861 Run status group 0 (all jobs): 00:24:24.861 READ: bw=68.8MiB/s (72.1MB/s), 68.8MiB/s-68.8MiB/s (72.1MB/s-72.1MB/s), io=138MiB (145MB), run=2004-2004msec 00:24:24.861 WRITE: bw=68.8MiB/s (72.1MB/s), 68.8MiB/s-68.8MiB/s (72.1MB/s-72.1MB/s), io=138MiB (145MB), run=2004-2004msec 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:24.861 23:15:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 
trsvcid=4420 ns=1' 00:24:25.123 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:25.123 fio-3.35 00:24:25.123 Starting 1 thread 00:24:25.123 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.684 00:24:27.684 test: (groupid=0, jobs=1): err= 0: pid=1029536: Fri Jun 7 23:15:19 2024 00:24:27.684 read: IOPS=13.0k, BW=204MiB/s (213MB/s)(403MiB/1978msec) 00:24:27.684 slat (nsec): min=2312, max=32808, avg=2688.62, stdev=983.67 00:24:27.684 clat (usec): min=314, max=8193, avg=1835.07, stdev=1177.63 00:24:27.684 lat (usec): min=318, max=8208, avg=1837.76, stdev=1178.01 00:24:27.684 clat percentiles (usec): 00:24:27.684 | 1.00th=[ 603], 5.00th=[ 848], 10.00th=[ 979], 20.00th=[ 1139], 00:24:27.684 | 30.00th=[ 1270], 40.00th=[ 1369], 50.00th=[ 1500], 60.00th=[ 1631], 00:24:27.684 | 70.00th=[ 1811], 80.00th=[ 2073], 90.00th=[ 2933], 95.00th=[ 5014], 00:24:27.684 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7832], 00:24:27.684 | 99.99th=[ 8160] 00:24:27.684 bw ( KiB/s): min=100416, max=104096, per=49.19%, avg=102552.00, stdev=1683.86, samples=4 00:24:27.684 iops : min= 6276, max= 6506, avg=6409.50, stdev=105.24, samples=4 00:24:27.684 write: IOPS=7225, BW=113MiB/s (118MB/s)(208MiB/1842msec); 0 zone resets 00:24:27.684 slat (usec): min=27, max=103, avg=29.91, stdev= 5.21 00:24:27.684 clat (usec): min=4179, max=21349, avg=14077.31, stdev=1745.82 00:24:27.684 lat (usec): min=4206, max=21378, avg=14107.21, stdev=1745.68 00:24:27.684 clat percentiles (usec): 00:24:27.684 | 1.00th=[ 8094], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:24:27.684 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:24:27.684 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16188], 95.00th=[16909], 00:24:27.684 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:24:27.684 | 99.99th=[21103] 00:24:27.684 bw ( KiB/s): min=102432, max=107328, per=91.38%, avg=105640.00, stdev=2182.95, samples=4 00:24:27.684 iops : min= 6402, max= 6708, avg=6602.50, stdev=136.43, samples=4 00:24:27.684 lat (usec) : 500=0.24%, 750=1.67%, 1000=5.35% 00:24:27.684 lat (msec) : 2=43.98%, 4=9.73%, 10=5.52%, 20=33.48%, 50=0.04% 00:24:27.684 cpu : usr=97.75%, sys=0.60%, ctx=183, majf=0, minf=1 00:24:27.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:27.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.684 issued rwts: total=25773,13309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.684 00:24:27.684 Run status group 0 (all jobs): 00:24:27.684 READ: bw=204MiB/s (213MB/s), 204MiB/s-204MiB/s (213MB/s-213MB/s), io=403MiB (422MB), run=1978-1978msec 00:24:27.684 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=208MiB (218MB), run=1842-1842msec 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 
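Both fio runs above go through the SPDK NVMe fio plugin rather than a kernel block device: the plugin is LD_PRELOADed into fio and the NVMe-oF path is encoded in --filename. A condensed sketch of that invocation, with SPDK_DIR standing in for this job's workspace checkout:

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # shorthand for the paths in the trace
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK_DIR/app/fio/nvme/example_config.fio" \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
    # The job file sets ioengine=spdk, so I/O goes straight to cnode1's namespace over RDMA;
    # the second run swaps in mock_sgl_config.fio, which shows up above with 16 KiB blocks.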
00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:27.684 rmmod nvme_rdma 00:24:27.684 rmmod nvme_fabrics 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1028479 ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1028479 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1028479 ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1028479 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1028479 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1028479' 00:24:27.684 killing process with pid 1028479 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1028479 00:24:27.684 23:15:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1028479 00:24:27.943 23:15:20 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.943 23:15:20 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:27.943 00:24:27.943 real 0m13.996s 00:24:27.943 user 0m49.101s 00:24:27.943 sys 0m5.343s 00:24:27.943 23:15:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:27.943 23:15:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.943 ************************************ 00:24:27.943 END TEST nvmf_fio_host 00:24:27.943 ************************************ 00:24:27.943 23:15:20 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:27.943 23:15:20 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:27.943 23:15:20 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:27.943 23:15:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:27.943 ************************************ 00:24:27.943 START TEST nvmf_failover 00:24:27.943 ************************************ 00:24:27.943 23:15:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=rdma 00:24:28.202 * Looking for test storage... 00:24:28.202 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.202 23:15:20 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.203 23:15:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.771 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.772 
23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:34.772 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:34.772 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:34.772 Found net devices under 0000:da:00.0: mlx_0_0 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:34.772 Found net devices under 0000:da:00.1: mlx_0_1 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:34.772 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:34.773 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:34.773 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:34.773 altname enp218s0f0np0 00:24:34.773 altname ens818f0np0 00:24:34.773 inet 192.168.100.8/24 scope global mlx_0_0 00:24:34.773 valid_lft forever preferred_lft forever 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:34.773 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:34.773 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:34.773 altname enp218s0f1np1 00:24:34.773 altname ens818f1np1 00:24:34.773 inet 192.168.100.9/24 scope global mlx_0_1 00:24:34.773 valid_lft forever preferred_lft forever 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:34.773 23:15:26 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:34.773 192.168.100.9' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:34.773 192.168.100.9' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:34.773 192.168.100.9' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1033556 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1033556 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1033556 ']' 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:34.773 23:15:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.773 [2024-06-07 23:15:26.643537] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:24:34.773 [2024-06-07 23:15:26.643585] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.773 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.773 [2024-06-07 23:15:26.704167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.773 [2024-06-07 23:15:26.785398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.773 [2024-06-07 23:15:26.785431] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.773 [2024-06-07 23:15:26.785438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.773 [2024-06-07 23:15:26.785444] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.773 [2024-06-07 23:15:26.785449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
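nvmftestinit above ends by loading the RDMA kernel stack and the host-side NVMe/RDMA module before the failover target comes up on cores 1-3; the module set from the trace reduces to:

    # Kernel modules loaded by load_ib_rdma_modules / rdma_device_init in the trace above.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    modprobe nvme-rdma    # kernel NVMe/RDMA initiator, loaded by common.sh for the rdma transport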
00:24:34.773 [2024-06-07 23:15:26.785568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.773 [2024-06-07 23:15:26.785660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.773 [2024-06-07 23:15:26.785661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.340 23:15:27 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:35.599 [2024-06-07 23:15:27.673926] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6a61f0/0x6aa6e0) succeed. 00:24:35.599 [2024-06-07 23:15:27.682866] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6a7790/0x6ebd70) succeed. 00:24:35.599 23:15:27 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:35.858 Malloc0 00:24:35.858 23:15:27 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.117 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.117 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:36.376 [2024-06-07 23:15:28.482752] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:36.376 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:36.376 [2024-06-07 23:15:28.651114] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:36.635 [2024-06-07 23:15:28.827736] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1033832 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm 
-f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1033832 /var/tmp/bdevperf.sock 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1033832 ']' 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:36.635 23:15:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.571 23:15:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:37.571 23:15:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:37.571 23:15:29 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.829 NVMe0n1 00:24:37.829 23:15:29 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.087 00:24:38.087 23:15:30 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1034060 00:24:38.087 23:15:30 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.087 23:15:30 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:39.022 23:15:31 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:39.280 23:15:31 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:42.561 23:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.561 00:24:42.562 23:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:42.562 23:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:45.847 23:15:37 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:45.847 [2024-06-07 23:15:37.977668] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:45.847 23:15:38 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:46.783 23:15:39 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:47.042 23:15:39 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 1034060 00:24:53.612 0 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1033832 ']' 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1033832' 00:24:53.612 killing process with pid 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1033832 00:24:53.612 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:53.612 [2024-06-07 23:15:28.898961] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:24:53.612 [2024-06-07 23:15:28.899023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033832 ] 00:24:53.612 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.612 [2024-06-07 23:15:28.960518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.612 [2024-06-07 23:15:29.035337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.612 Running I/O for 15 seconds... 
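The bdevperf run that follows is where the failover actually happens: while 15 seconds of verify I/O are in flight against NVMe0 (attached on ports 4420 and 4421), failover.sh removes and re-adds cnode1 listeners, and the ABORTED - SQ DELETION (00/08) completions logged below are the expected symptom of each RDMA queue pair being torn down as its listener goes away, before NVMe0 reconnects on an alternate port. The listener sequence from the trace, with the rpc.py path shortened to $RPC:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
         -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422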
00:24:53.612 [2024-06-07 23:15:32.360037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187000 00:24:53.612 [2024-06-07 23:15:32.360264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.612 [2024-06-07 23:15:32.360272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360475] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:24:53.613 [2024-06-07 23:15:32.360509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.613 [2024-06-07 23:15:32.360802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.613 [2024-06-07 23:15:32.360808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:53.614 [2024-06-07 23:15:32.360890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.360988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.360996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 
00:24:53.614 [2024-06-07 23:15:32.361307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.614 [2024-06-07 23:15:32.361349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.614 [2024-06-07 23:15:32.361355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.615 [2024-06-07 23:15:32.361847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.615 [2024-06-07 23:15:32.361855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:24:53.615 [2024-06-07 23:15:32.361862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0
00:24:53.615 [2024-06-07 23:15:32.361869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:53.615 [2024-06-07 23:15:32.361875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0
00:24:53.615 [2024-06-07 23:15:32.363859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:53.615 [2024-06-07 23:15:32.363873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:53.615 [2024-06-07 23:15:32.363880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24328 len:8 PRP1 0x0 PRP2 0x0
00:24:53.616 [2024-06-07 23:15:32.363888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.616 [2024-06-07 23:15:32.363927] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:24:53.616 [2024-06-07 23:15:32.363936] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:24:53.616 [2024-06-07 23:15:32.363943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:53.616 [2024-06-07 23:15:32.366714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:53.616 [2024-06-07 23:15:32.381323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:53.616 [2024-06-07 23:15:32.429608] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
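Editor's note: the bdev_nvme and nvme_ctrlr notices just above are the part of try.txt that shows the first failover actually happened: the 4420 qpair is disconnected and freed, the path switches to 192.168.100.8:4421, and the controller reset completes. A hedged sketch of how that can be checked from the dumped log after the run; this is illustrative only and not the check failover.sh itself performs:

  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  # Both notices must be present for the 4420 -> 4421 switch to count as successful.
  if grep -q 'Start failover from 192.168.100.8:4420 to 192.168.100.8:4421' "$log" &&
     grep -q 'Resetting controller successful' "$log"; then
      echo 'failover to 192.168.100.8:4421 verified'
  else
      echo 'failover not observed in try.txt' >&2
      exit 1
  fi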
00:24:53.616 [2024-06-07 23:15:35.798206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:24:53.616 [2024-06-07 23:15:35.798543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.616 [2024-06-07 23:15:35.798645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.616 [2024-06-07 23:15:35.798652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112320 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:24:53.617 
[2024-06-07 23:15:35.798786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:24:53.617 [2024-06-07 23:15:35.798883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.798988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.798994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.617 [2024-06-07 23:15:35.799145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.617 [2024-06-07 23:15:35.799151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112584 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 
23:15:35.799595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:24:53.618 [2024-06-07 23:15:35.799674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.618 [2024-06-07 23:15:35.799681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.618 [2024-06-07 23:15:35.799688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112672 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.799992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.799998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 
00:24:53.619 [2024-06-07 23:15:35.800006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.800015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.800023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.800029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.800036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.619 [2024-06-07 23:15:35.800043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.801799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.619 [2024-06-07 23:15:35.801812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.619 [2024-06-07 23:15:35.801818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112856 len:8 PRP1 0x0 PRP2 0x0 00:24:53.619 [2024-06-07 23:15:35.801825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:35.801861] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:53.619 [2024-06-07 23:15:35.801870] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:53.619 [2024-06-07 23:15:35.801878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.619 [2024-06-07 23:15:35.804663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.619 [2024-06-07 23:15:35.819109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:53.619 [2024-06-07 23:15:35.858519] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
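Editor's note: the burst above ends with the driver aborting its queued I/O, freeing qpair 0x2000192e48c0, failing over from 192.168.100.8:4421 to 192.168.100.8:4422, hitting a CQ transport error (-6) and then completing the controller reset successfully, before the next burst (23:15:40) starts below. The following is a minimal sketch, not part of the SPDK test suite, for summarizing these repeated NOTICE lines offline; the console.log path is a placeholder assumption, and the regexes only cover the exact message strings visible in this log.

#!/usr/bin/env python3
# Sketch: count command/abort notices and list failover/reset events from a saved
# copy of this console log. Assumes the log was saved as "console.log" (placeholder).
import re
from collections import Counter

LOG_PATH = "console.log"  # hypothetical path to the captured autotest output

# Patterns match the NOTICE strings printed by nvme_qpair.c and bdev_nvme.c above.
cmd_re = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+)")
abort_re = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
failover_re = re.compile(r"Start failover from (\S+) to (\S+)")
reset_ok_re = re.compile(r"Resetting controller successful")

ops = Counter()   # READ vs WRITE commands printed
aborts = 0        # completions reported as ABORTED - SQ DELETION
failovers = []    # (from, to) transport address pairs
resets_ok = 0

with open(LOG_PATH) as fh:
    for line in fh:
        # finditer/findall handle several entries fused onto one physical line,
        # as in this extract.
        for m in cmd_re.finditer(line):
            ops[m.group(1)] += 1
        aborts += len(abort_re.findall(line))
        failovers += failover_re.findall(line)
        resets_ok += len(reset_ok_re.findall(line))

print(f"commands printed: {dict(ops)}")
print(f"ABORTED - SQ DELETION completions: {aborts}")
for src, dst in failovers:
    print(f"failover: {src} -> {dst}")
print(f"successful controller resets: {resets_ok}")

Running this against the full log would give a per-run tally of aborted I/O and the 4421 -> 4422 failover events instead of scrolling through the raw per-command notices.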
00:24:53.619 [2024-06-07 23:15:40.175033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.619 [2024-06-07 23:15:40.175193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:24:53.619 [2024-06-07 23:15:40.175200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 
00:24:53.620 [2024-06-07 23:15:40.175485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.620 [2024-06-07 23:15:40.175631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.620 [2024-06-07 23:15:40.175744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:24:53.620 [2024-06-07 23:15:40.175751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 
23:15:40.175759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.175980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.175987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.621 [2024-06-07 23:15:40.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.621 [2024-06-07 23:15:40.176007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:53.621 [2024-06-07 23:15:40.176024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.621 [2024-06-07 23:15:40.176037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.621 [2024-06-07 23:15:40.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187000 00:24:53.621 [2024-06-07 23:15:40.176198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:53.622 [2024-06-07 23:15:40.176565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.622 [2024-06-07 23:15:40.176723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.622 [2024-06-07 23:15:40.176746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187000 00:24:53.622 [2024-06-07 23:15:40.176753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187000 00:24:53.623 [2024-06-07 23:15:40.176767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x187000 00:24:53.623 [2024-06-07 23:15:40.176782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.176887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:53.623 [2024-06-07 23:15:40.176893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:f3e0 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.178750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.623 [2024-06-07 23:15:40.178762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.623 [2024-06-07 23:15:40.178769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80504 len:8 PRP1 0x0 PRP2 0x0 00:24:53.623 [2024-06-07 23:15:40.178779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.623 [2024-06-07 23:15:40.178817] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:53.623 [2024-06-07 23:15:40.178825] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:53.623 [2024-06-07 23:15:40.178832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.623 [2024-06-07 23:15:40.181614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.623 [2024-06-07 23:15:40.195935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:53.623 [2024-06-07 23:15:40.241472] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
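The wall of READ/WRITE entries above is bdevperf's abort dump: when failover.sh tears down the 192.168.100.8:4422 path, every command still queued on the deleted submission queue is completed with ABORTED - SQ DELETION (status 00/08), after which bdev_nvme fails over to 192.168.100.8:4420 and resets the controller. A rough way to summarize such a dump from a saved copy of the console output (console.log is an illustrative name, not a file this job produces):

  # count aborted completions, then break the aborted commands down by opcode
  grep -c 'ABORTED - SQ DELETION' console.log
  grep -oE '(READ|WRITE) sqid:1' console.log | sort | uniq -c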
00:24:53.623 
00:24:53.623 Latency(us)
00:24:53.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:53.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:53.623 Verification LBA range: start 0x0 length 0x4000
00:24:53.623 NVMe0n1 : 15.00 14267.91 55.73 311.17 0.00 8756.10 358.89 1014622.11
00:24:53.623 ===================================================================================================================
00:24:53.623 Total : 14267.91 55.73 311.17 0.00 8756.10 358.89 1014622.11
00:24:53.623 Received shutdown signal, test time was about 15.000000 seconds
00:24:53.623 
00:24:53.623 Latency(us)
00:24:53.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:53.623 ===================================================================================================================
00:24:53.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1036580
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1036580 /var/tmp/bdevperf.sock
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1036580 ']'
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
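As a quick sanity check on the 15-second summary above, the MiB/s column follows from the IOPS column at the job's 4096-byte I/O size: 14267.91 IOPS x 4096 B = 58,441,359 B/s, and 58,441,359 / 1,048,576 = 55.73 MiB/s. The Fail/s column (311.17) presumably counts the I/O completed with the abort status shown earlier while paths were being dropped, and the second, all-zero table is only the post-shutdown summary.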
00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:53.623 23:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:54.191 23:15:46 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:54.191 23:15:46 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:54.191 23:15:46 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:54.468 [2024-06-07 23:15:46.596697] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:54.468 23:15:46 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:54.761 [2024-06-07 23:15:46.777339] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:54.761 23:15:46 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.761 NVMe0n1 00:24:55.019 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.019 00:24:55.277 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.277 00:24:55.277 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.277 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:55.535 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.793 23:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:59.078 23:15:50 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.078 23:15:50 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:59.078 23:15:51 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1037502 00:24:59.078 23:15:51 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.078 23:15:51 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 1037502 00:25:00.014 0 00:25:00.014 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:00.014 [2024-06-07 23:15:45.642587] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 
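Condensed from the failover.sh@76-@92 trace above, the second half of the test adds two more listeners, attaches the same controller through three ports, then drops the active path and checks that the NVMe0 bdev survives. A rough sketch with the arguments copied from the trace (not re-verified against the script itself):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # extra RDMA listeners for the multipath portion of the test
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  # attach the controller through all three ports
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the 4420 path, let failover settle, and confirm NVMe0 is still present
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0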
00:25:00.014 [2024-06-07 23:15:45.642663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036580 ]
00:25:00.014 EAL: No free 2048 kB hugepages reported on node 1
00:25:00.014 [2024-06-07 23:15:45.703786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:00.014 [2024-06-07 23:15:45.773597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:25:00.014 [2024-06-07 23:15:47.878497] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:25:00.014 [2024-06-07 23:15:47.879098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:00.014 [2024-06-07 23:15:47.879129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:00.014 [2024-06-07 23:15:47.901045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:00.014 [2024-06-07 23:15:47.917041] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:00.014 Running I/O for 1 seconds...
00:25:00.014 
00:25:00.014 Latency(us)
00:25:00.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.014 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:00.014 Verification LBA range: start 0x0 length 0x4000
00:25:00.014 NVMe0n1 : 1.00 18008.42 70.35 0.00 0.00 7062.82 1115.67 11359.57
00:25:00.014 ===================================================================================================================
00:25:00.014 Total : 18008.42 70.35 0.00 0.00 7062.82 1115.67 11359.57
00:25:00.014 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:00.014 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:00.273 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:00.531 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:00.531 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:00.531 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:00.790 23:15:52 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:04.070 23:15:55 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:04.070 23:15:55 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 1036580
00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1036580 ']'
00:25:04.070 23:15:56 
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1036580 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1036580 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1036580' 00:25:04.070 killing process with pid 1036580 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1036580 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1036580 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:04.070 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:04.328 rmmod nvme_rdma 00:25:04.328 rmmod nvme_fabrics 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1033556 ']' 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1033556 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1033556 ']' 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1033556 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:04.328 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1033556 00:25:04.587 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:04.587 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:04.587 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
1033556' 00:25:04.587 killing process with pid 1033556 00:25:04.587 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1033556 00:25:04.587 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1033556 00:25:04.845 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.845 23:15:56 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:04.845 00:25:04.845 real 0m36.714s 00:25:04.845 user 2m3.449s 00:25:04.845 sys 0m6.653s 00:25:04.845 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:04.845 23:15:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:04.845 ************************************ 00:25:04.845 END TEST nvmf_failover 00:25:04.845 ************************************ 00:25:04.845 23:15:56 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:25:04.845 23:15:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:04.845 23:15:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:04.845 23:15:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:04.845 ************************************ 00:25:04.845 START TEST nvmf_host_discovery 00:25:04.845 ************************************ 00:25:04.845 23:15:56 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:25:04.845 * Looking for test storage... 00:25:04.845 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.845 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.846 23:15:57 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:04.846 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:25:04.846 00:25:04.846 real 0m0.115s 00:25:04.846 user 0m0.055s 00:25:04.846 sys 0m0.068s 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:04.846 23:15:57 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.846 ************************************ 00:25:04.846 END TEST nvmf_host_discovery 00:25:04.846 ************************************ 00:25:04.846 23:15:57 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:25:04.846 23:15:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:04.846 23:15:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:04.846 23:15:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:05.105 ************************************ 00:25:05.105 START TEST nvmf_host_multipath_status 00:25:05.105 ************************************ 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:25:05.105 * Looking for test storage... 
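The host discovery suite above is a no-op on RDMA: discovery.sh@11-@13 compares the transport against rdma, prints the skip message, and exits 0, which is why the END TEST banner still appears before the multipath_status run starts. Reconstructed from that trace (the $TEST_TRANSPORT name is an assumption; the trace only shows the already-expanded comparison '[' rdma == rdma ']'):

  # discovery.sh bails out early on RDMA targets; variable name assumed
  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi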
00:25:05.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:05.105 23:15:57 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.105 23:15:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:11.670 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:11.670 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:11.670 
23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:11.670 Found net devices under 0000:da:00.0: mlx_0_0 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:11.670 Found net devices under 0000:da:00.1: mlx_0_1 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:11.670 23:16:03 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:11.670 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:11.671 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.671 link/ether ec:0d:9a:8b:2b:7c brd 
ff:ff:ff:ff:ff:ff 00:25:11.671 altname enp218s0f0np0 00:25:11.671 altname ens818f0np0 00:25:11.671 inet 192.168.100.8/24 scope global mlx_0_0 00:25:11.671 valid_lft forever preferred_lft forever 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:11.671 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.671 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:11.671 altname enp218s0f1np1 00:25:11.671 altname ens818f1np1 00:25:11.671 inet 192.168.100.9/24 scope global mlx_0_1 00:25:11.671 valid_lft forever preferred_lft forever 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:11.671 192.168.100.9' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:11.671 192.168.100.9' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:11.671 192.168.100.9' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1042047 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1042047 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1042047 ']' 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:11.671 23:16:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.671 [2024-06-07 23:16:03.446001] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:25:11.671 [2024-06-07 23:16:03.446076] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.671 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.671 [2024-06-07 23:16:03.508755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.671 [2024-06-07 23:16:03.583084] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.671 [2024-06-07 23:16:03.583124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.671 [2024-06-07 23:16:03.583130] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.671 [2024-06-07 23:16:03.583140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.672 [2024-06-07 23:16:03.583161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
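A condensed sketch of the flow this stretch of the trace records may help when reading the dense xtrace output that follows. Every command, socket path, NQN, address and jq filter below is copied from the trace itself (host/multipath_status.sh and nvmf/common.sh as traced in the surrounding lines); nothing beyond what the log already shows is introduced:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Target side: RDMA transport, one 64 MiB / 512 B malloc namespace, ANA-enabled
  # subsystem (-r -m 2) listening on two ports so the host sees two paths.
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  # Host side: bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90)
  # attaches the same NQN through both listeners, the second time with -x multipath.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # Each check below first flips the ANA state of a listener on the target ...
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
  # ... then reads the per-path flags back from the host through the bdevperf RPC socket:
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

The remainder of this section repeats that set_ANA_state / sleep 1 / check_status cycle across optimized, non_optimized and inaccessible combinations for ports 4420 and 4421, and later switches the host's policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active before running the same checks again.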
00:25:11.672 [2024-06-07 23:16:03.583209] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.672 [2024-06-07 23:16:03.583211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1042047 00:25:12.240 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:12.240 [2024-06-07 23:16:04.451281] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x95d360/0x961850) succeed. 00:25:12.240 [2024-06-07 23:16:04.460723] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x95e860/0x9a2ee0) succeed. 00:25:12.499 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.499 Malloc0 00:25:12.499 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:12.757 23:16:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.016 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:13.016 [2024-06-07 23:16:05.208426] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:13.016 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:25:13.275 [2024-06-07 23:16:05.376651] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1042311 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 1042311 /var/tmp/bdevperf.sock 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1042311 ']' 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:13.275 23:16:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.211 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:14.211 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:25:14.211 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:14.211 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:14.470 Nvme0n1 00:25:14.470 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:14.728 Nvme0n1 00:25:14.728 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:14.728 23:16:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:16.648 23:16:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:16.648 23:16:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:16.910 23:16:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:17.168 23:16:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:18.102 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:18.102 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:18.102 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:18.102 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.361 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.620 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.620 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.620 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.620 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.878 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.878 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.879 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.879 23:16:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.879 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.879 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:18.879 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.879 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.175 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.175 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:25:19.175 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:19.436 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:19.436 23:16:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.812 23:16:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.812 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.812 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.812 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.812 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.070 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.070 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.070 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.070 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.328 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.328 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.329 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.587 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.587 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:21.587 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:21.846 23:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:21.846 23:16:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.222 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.222 
23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.481 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.481 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.481 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.481 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.739 23:16:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.997 23:16:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.998 23:16:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:23.998 23:16:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:24.256 23:16:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:24.256 23:16:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.631 23:16:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.889 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.889 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.889 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.889 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.147 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.405 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.405 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:26.405 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:26.663 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:26.663 23:16:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:28.037 23:16:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:28.037 23:16:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.037 23:16:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.037 23:16:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.037 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.295 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.295 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.295 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:28.295 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.554 
23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.554 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.812 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.812 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:28.812 23:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:29.069 23:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:29.070 23:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.445 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:30.704 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.704 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.704 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.704 23:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.961 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.218 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.218 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:31.475 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:31.476 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:31.476 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:31.733 23:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:32.668 23:16:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:32.668 23:16:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.668 23:16:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.669 23:16:24 nvmf_rdma.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.927 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.927 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.927 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.927 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.186 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.444 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.444 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.444 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.444 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.703 23:16:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:33.703 23:16:25 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:33.962 23:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:34.220 23:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:35.157 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:35.157 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:35.157 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.157 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.416 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.674 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.674 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.674 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.674 23:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.933 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.192 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.192 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:36.192 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:36.450 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:36.451 23:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.859 23:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.859 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.859 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.859 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.859 23:16:30 nvmf_rdma.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.118 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.118 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.118 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.118 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.376 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.635 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.635 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:38.635 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:38.894 23:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:38.894 23:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 
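Every port check in the trace above and below is the same three commands run back to back: an RPC to the bdevperf application over /var/tmp/bdevperf.sock, a jq filter keyed on the listener's trsvcid, and a bash comparison against the expected value; the ANA flips between checks are two more RPCs against the target. A minimal sketch of the two helpers this trace exercises, reconstructed from the traced command lines rather than copied from host/multipath_status.sh (the script's real bodies may differ, and rpc.py is assumed to be on PATH here):

    # port_status PORT FIELD EXPECTED: ask bdevperf which I/O paths it sees and
    # compare one field (current/connected/accessible) of the path on PORT.
    port_status() {
            local port=$1 field=$2 expected=$3
            local actual
            actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
            [[ "$actual" == "$expected" ]]
    }

    # set_ANA_state STATE_4420 STATE_4421: change the ANA state the target reports
    # on each of its two listeners; the test then sleeps 1s so the host notices.
    set_ANA_state() {
            rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                    -t rdma -a 192.168.100.8 -s 4420 -n "$1"
            rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                    -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    # e.g. make 4421 the optimized listener and confirm the host now routes I/O there
    set_ANA_state non_optimized optimized && sleep 1
    port_status 4421 current true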
00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.270 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.529 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.529 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.529 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.529 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.787 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.787 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.787 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.787 23:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.787 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.787 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:40.787 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.787 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1042311 00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1042311 ']' 00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- 
common/autotest_common.sh@953 -- # kill -0 1042311
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1042311
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1042311'
killing process with pid 1042311
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1042311
00:25:41.045 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1042311
00:25:41.305 Connection closed with partial response:
00:25:41.305
00:25:41.305
00:25:41.305 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1042311
00:25:41.305 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:41.305 [2024-06-07 23:16:05.424794] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:25:41.305 [2024-06-07 23:16:05.424844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042311 ]
00:25:41.305 EAL: No free 2048 kB hugepages reported on node 1
00:25:41.305 [2024-06-07 23:16:05.478806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:41.305 [2024-06-07 23:16:05.551980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:25:41.305 Running I/O for 90 seconds...
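Everything from here on is the bdevperf log the script just dumped with cat (try.txt): nvme_qpair.c prints every READ/WRITE it queues and every completion it gets back, and while the listeners are being flipped the completions arrive as ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path related) with status code 0x02 (ANA inaccessible), which the host's multipath bdev treats as a reason to retry on the other path rather than fail the I/O. The dump is long and highly repetitive; when triaging a log in this format it can be condensed with something like the following (a grep/sed sketch, not part of the test itself):

    # Tally completions per NVMe status string in a bdevperf trace such as try.txt
    grep 'spdk_nvme_print_completion' try.txt |
            sed -e 's/.*NOTICE\*: //' -e 's/ qid:.*//' |
            sort | uniq -c | sort -rn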
00:25:41.305 [2024-06-07 23:16:18.679272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 
[2024-06-07 23:16:18.679627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.305 [2024-06-07 23:16:18.679736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.305 [2024-06-07 23:16:18.679805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:25:41.305 [2024-06-07 23:16:18.679812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.679907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.679990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.679996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 
m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.680232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.680247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.680262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.306 [2024-06-07 23:16:18.680971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.680984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.680991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.681003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.681014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.681027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41576 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.681033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.681046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.681052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.306 [2024-06-07 23:16:18.681065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:25:41.306 [2024-06-07 23:16:18.681072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 
key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000 00:25:41.307 [2024-06-07 23:16:18.681364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
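One detail the command prints make visible: the READs carry SGL KEYED DATA BLOCK descriptors (a host buffer address plus an RDMA remote key, here key:0x187000) for the target to RDMA-write into, while the 4 KiB WRITEs are printed with SGL DATA BLOCK OFFSET 0x0, the in-capsule form of the descriptor, so their payload travelled inside the command capsule. A rough per-opcode breakdown of such a dump (an awk sketch under the same try.txt format assumption as above):

    # Count commands by opcode and SGL descriptor type
    grep 'nvme_io_qpair_print_command' try.txt | awk '
            /READ/                  { op = "READ" }
            /WRITE/                 { op = "WRITE" }
            /SGL KEYED/             { n[op " keyed"]++ }
            /SGL DATA BLOCK OFFSET/ { n[op " in-capsule"]++ }
            END { for (k in n) print n[k], k }'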
00:25:41.307 [2024-06-07 23:16:18.681571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.307 [2024-06-07 23:16:18.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.307 [2024-06-07 23:16:18.681798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:18.681881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.681988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.681996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:25:41.308 [2024-06-07 23:16:18.682149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:18.682206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:18.682212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.136700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.136742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187000 00:25:41.308 
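The *NOTICE* lines above and below are SPDK echoing every queued I/O together with its completion status while the active listener is held in the ANA "inaccessible" state: status (03/02) is NVMe status code type 3h (Path Related Status) with status code 02h (Asymmetric Access Inaccessible), so the flood is expected for this test rather than a sign of data loss. A minimal sketch for summarizing such a dump when triaging a run, assuming the bdevperf output was captured to the try.txt file this script removes during cleanup:

    # Count ANA-inaccessible completions per submission queue (qid).
    # try.txt is assumed to be the capture file used by this test; adjust the path as needed.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' try.txt |
        sort | uniq -c | sort -rn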
[2024-06-07 23:16:31.137382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.308 [2024-06-07 23:16:31.137538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.308 [2024-06-07 23:16:31.137547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:25:41.308 [2024-06-07 23:16:31.137554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.137786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.137999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.309 [2024-06-07 23:16:31.138138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:25:41.309 [2024-06-07 23:16:31.138215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.309 [2024-06-07 23:16:31.138225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.310 [2024-06-07 23:16:31.138295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.310 [2024-06-07 23:16:31.138310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 
23:16:31.138466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.310 [2024-06-07 23:16:31.138482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187000 00:25:41.310 [2024-06-07 23:16:31.138488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.310 Received shutdown signal, test time was about 26.211079 seconds 00:25:41.310 00:25:41.310 Latency(us) 00:25:41.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.310 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:41.310 Verification LBA range: start 0x0 length 0x4000 00:25:41.310 Nvme0n1 : 26.21 15885.96 62.05 0.00 0.00 8037.79 80.94 3019898.88 00:25:41.310 =================================================================================================================== 00:25:41.310 Total : 15885.96 62.05 0.00 0.00 8037.79 80.94 3019898.88 00:25:41.310 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:41.569 rmmod nvme_rdma 00:25:41.569 rmmod nvme_fabrics 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1042047 ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1042047 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1042047 ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1042047 00:25:41.569 23:16:33 
nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1042047 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1042047' 00:25:41.569 killing process with pid 1042047 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1042047 00:25:41.569 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1042047 00:25:41.827 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.828 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:41.828 00:25:41.828 real 0m36.851s 00:25:41.828 user 1m45.235s 00:25:41.828 sys 0m8.090s 00:25:41.828 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:41.828 23:16:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:41.828 ************************************ 00:25:41.828 END TEST nvmf_host_multipath_status 00:25:41.828 ************************************ 00:25:41.828 23:16:34 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:41.828 23:16:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:41.828 23:16:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:41.828 23:16:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:41.828 ************************************ 00:25:41.828 START TEST nvmf_discovery_remove_ifc 00:25:41.828 ************************************ 00:25:41.828 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:42.087 * Looking for test storage... 
00:25:42.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:42.087 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
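The whole discovery_remove_ifc run therefore reduces to the guard traced above: compare the requested transport against rdma, print the skip message, and exit 0 (traced just below) so run_test still records a pass. A sketch of that pattern; the TEST_TRANSPORT variable name is an assumption, only the comparison and the echoed message come from the trace:

    # Skip early on RDMA; the rdma stack cannot put host and target on the same IP.
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi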
00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:25:42.087 00:25:42.087 real 0m0.115s 00:25:42.087 user 0m0.050s 00:25:42.087 sys 0m0.072s 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:42.087 23:16:34 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.087 ************************************ 00:25:42.087 END TEST nvmf_discovery_remove_ifc 00:25:42.087 ************************************ 00:25:42.087 23:16:34 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:42.087 23:16:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:42.087 23:16:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:42.087 23:16:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:42.087 ************************************ 00:25:42.087 START TEST nvmf_identify_kernel_target 00:25:42.087 ************************************ 00:25:42.087 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:42.087 * Looking for test storage... 00:25:42.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:42.087 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.087 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.088 23:16:34 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.088 23:16:34 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.088 23:16:34 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:48.653 
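gather_supported_nvmf_pci_devs, which starts here, fills the e810/x722/mlx arrays with known Intel and Mellanox device IDs and then walks the PCI bus; the per-device "Found ..." lines below are the result of that walk. Roughly the same inventory can be taken by hand with lspci (the ID list mirrors the mlx+= appends in the trace that follows; shown only as a triage aid, not part of the test suite):

    # List Mellanox (vendor 0x15b3) NICs that the nvmf tests would consider usable.
    for id in 1013 1015 1017 1019 101d 1021 a2d6 a2dc; do
        lspci -nn -d 15b3:"$id"
    done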
23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:48.653 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:48.653 Found 0000:da:00.1 (0x15b3 - 
0x1015) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:48.653 Found net devices under 0000:da:00.0: mlx_0_0 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:48.653 Found net devices under 0000:da:00.1: mlx_0_1 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:25:48.653 23:16:40 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:48.653 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:48.654 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:48.654 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:48.654 altname enp218s0f0np0 00:25:48.654 altname ens818f0np0 00:25:48.654 inet 192.168.100.8/24 scope global mlx_0_0 00:25:48.654 valid_lft forever preferred_lft forever 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:48.654 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:48.654 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:48.654 altname enp218s0f1np1 00:25:48.654 altname ens818f1np1 00:25:48.654 inet 192.168.100.9/24 scope global mlx_0_1 00:25:48.654 valid_lft forever preferred_lft forever 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.654 23:16:40 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:48.654 192.168.100.9' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:48.654 192.168.100.9' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:48.654 192.168.100.9' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:25:48.654 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:48.655 23:16:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:51.186 Waiting for block devices as requested 00:25:51.186 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:51.186 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:51.186 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:51.443 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:51.443 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:51.443 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:51.443 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:51.702 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:51.702 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:51.702 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:51.702 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:51.960 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:51.960 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:51.960 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:52.218 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:52.218 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:52.218 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:52.477 No valid GPT data, bailing 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:52.477 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:25:52.736 00:25:52.736 Discovery Log Number of Records 2, Generation counter 2 00:25:52.736 =====Discovery Log Entry 0====== 00:25:52.736 trtype: rdma 00:25:52.736 adrfam: ipv4 00:25:52.736 subtype: current discovery subsystem 00:25:52.736 treq: not specified, sq flow control disable supported 00:25:52.736 portid: 1 00:25:52.736 trsvcid: 4420 00:25:52.736 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:52.736 traddr: 192.168.100.8 00:25:52.736 eflags: none 00:25:52.736 rdma_prtype: not specified 00:25:52.736 rdma_qptype: connected 00:25:52.736 rdma_cms: rdma-cm 00:25:52.736 rdma_pkey: 0x0000 00:25:52.736 =====Discovery Log Entry 1====== 00:25:52.736 trtype: rdma 00:25:52.736 adrfam: ipv4 00:25:52.736 subtype: nvme subsystem 00:25:52.736 treq: not specified, sq flow control disable supported 00:25:52.736 portid: 1 00:25:52.736 trsvcid: 4420 00:25:52.736 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:52.736 traddr: 192.168.100.8 00:25:52.736 eflags: none 00:25:52.736 rdma_prtype: not specified 00:25:52.736 rdma_qptype: connected 00:25:52.736 rdma_cms: rdma-cm 00:25:52.736 rdma_pkey: 0x0000 00:25:52.736 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:52.736 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.736 ===================================================== 00:25:52.736 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:52.736 ===================================================== 00:25:52.736 Controller Capabilities/Features 00:25:52.736 ================================ 00:25:52.736 Vendor ID: 0000 00:25:52.736 Subsystem Vendor ID: 0000 00:25:52.736 Serial Number: 78a720e627e3711706e1 00:25:52.736 Model Number: Linux 00:25:52.736 Firmware Version: 6.7.0-68 00:25:52.736 Recommended Arb Burst: 0 00:25:52.736 IEEE OUI Identifier: 00 00 00 00:25:52.736 Multi-path I/O 00:25:52.736 May have multiple subsystem ports: No 00:25:52.736 May have multiple controllers: No 00:25:52.736 Associated with SR-IOV VF: No 00:25:52.736 
Max Data Transfer Size: Unlimited 00:25:52.736 Max Number of Namespaces: 0 00:25:52.736 Max Number of I/O Queues: 1024 00:25:52.736 NVMe Specification Version (VS): 1.3 00:25:52.736 NVMe Specification Version (Identify): 1.3 00:25:52.736 Maximum Queue Entries: 128 00:25:52.736 Contiguous Queues Required: No 00:25:52.736 Arbitration Mechanisms Supported 00:25:52.736 Weighted Round Robin: Not Supported 00:25:52.736 Vendor Specific: Not Supported 00:25:52.736 Reset Timeout: 7500 ms 00:25:52.736 Doorbell Stride: 4 bytes 00:25:52.736 NVM Subsystem Reset: Not Supported 00:25:52.736 Command Sets Supported 00:25:52.736 NVM Command Set: Supported 00:25:52.736 Boot Partition: Not Supported 00:25:52.736 Memory Page Size Minimum: 4096 bytes 00:25:52.736 Memory Page Size Maximum: 4096 bytes 00:25:52.736 Persistent Memory Region: Not Supported 00:25:52.736 Optional Asynchronous Events Supported 00:25:52.736 Namespace Attribute Notices: Not Supported 00:25:52.736 Firmware Activation Notices: Not Supported 00:25:52.736 ANA Change Notices: Not Supported 00:25:52.736 PLE Aggregate Log Change Notices: Not Supported 00:25:52.736 LBA Status Info Alert Notices: Not Supported 00:25:52.736 EGE Aggregate Log Change Notices: Not Supported 00:25:52.736 Normal NVM Subsystem Shutdown event: Not Supported 00:25:52.736 Zone Descriptor Change Notices: Not Supported 00:25:52.736 Discovery Log Change Notices: Supported 00:25:52.736 Controller Attributes 00:25:52.736 128-bit Host Identifier: Not Supported 00:25:52.736 Non-Operational Permissive Mode: Not Supported 00:25:52.736 NVM Sets: Not Supported 00:25:52.736 Read Recovery Levels: Not Supported 00:25:52.736 Endurance Groups: Not Supported 00:25:52.736 Predictable Latency Mode: Not Supported 00:25:52.736 Traffic Based Keep ALive: Not Supported 00:25:52.736 Namespace Granularity: Not Supported 00:25:52.736 SQ Associations: Not Supported 00:25:52.736 UUID List: Not Supported 00:25:52.736 Multi-Domain Subsystem: Not Supported 00:25:52.736 Fixed Capacity Management: Not Supported 00:25:52.736 Variable Capacity Management: Not Supported 00:25:52.736 Delete Endurance Group: Not Supported 00:25:52.736 Delete NVM Set: Not Supported 00:25:52.736 Extended LBA Formats Supported: Not Supported 00:25:52.736 Flexible Data Placement Supported: Not Supported 00:25:52.736 00:25:52.736 Controller Memory Buffer Support 00:25:52.736 ================================ 00:25:52.736 Supported: No 00:25:52.736 00:25:52.736 Persistent Memory Region Support 00:25:52.736 ================================ 00:25:52.736 Supported: No 00:25:52.736 00:25:52.736 Admin Command Set Attributes 00:25:52.736 ============================ 00:25:52.736 Security Send/Receive: Not Supported 00:25:52.736 Format NVM: Not Supported 00:25:52.736 Firmware Activate/Download: Not Supported 00:25:52.736 Namespace Management: Not Supported 00:25:52.736 Device Self-Test: Not Supported 00:25:52.736 Directives: Not Supported 00:25:52.736 NVMe-MI: Not Supported 00:25:52.736 Virtualization Management: Not Supported 00:25:52.736 Doorbell Buffer Config: Not Supported 00:25:52.736 Get LBA Status Capability: Not Supported 00:25:52.736 Command & Feature Lockdown Capability: Not Supported 00:25:52.736 Abort Command Limit: 1 00:25:52.736 Async Event Request Limit: 1 00:25:52.736 Number of Firmware Slots: N/A 00:25:52.736 Firmware Slot 1 Read-Only: N/A 00:25:52.736 Firmware Activation Without Reset: N/A 00:25:52.736 Multiple Update Detection Support: N/A 00:25:52.736 Firmware Update Granularity: No Information Provided 00:25:52.736 
Per-Namespace SMART Log: No 00:25:52.736 Asymmetric Namespace Access Log Page: Not Supported 00:25:52.736 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:52.736 Command Effects Log Page: Not Supported 00:25:52.736 Get Log Page Extended Data: Supported 00:25:52.736 Telemetry Log Pages: Not Supported 00:25:52.736 Persistent Event Log Pages: Not Supported 00:25:52.736 Supported Log Pages Log Page: May Support 00:25:52.737 Commands Supported & Effects Log Page: Not Supported 00:25:52.737 Feature Identifiers & Effects Log Page:May Support 00:25:52.737 NVMe-MI Commands & Effects Log Page: May Support 00:25:52.737 Data Area 4 for Telemetry Log: Not Supported 00:25:52.737 Error Log Page Entries Supported: 1 00:25:52.737 Keep Alive: Not Supported 00:25:52.737 00:25:52.737 NVM Command Set Attributes 00:25:52.737 ========================== 00:25:52.737 Submission Queue Entry Size 00:25:52.737 Max: 1 00:25:52.737 Min: 1 00:25:52.737 Completion Queue Entry Size 00:25:52.737 Max: 1 00:25:52.737 Min: 1 00:25:52.737 Number of Namespaces: 0 00:25:52.737 Compare Command: Not Supported 00:25:52.737 Write Uncorrectable Command: Not Supported 00:25:52.737 Dataset Management Command: Not Supported 00:25:52.737 Write Zeroes Command: Not Supported 00:25:52.737 Set Features Save Field: Not Supported 00:25:52.737 Reservations: Not Supported 00:25:52.737 Timestamp: Not Supported 00:25:52.737 Copy: Not Supported 00:25:52.737 Volatile Write Cache: Not Present 00:25:52.737 Atomic Write Unit (Normal): 1 00:25:52.737 Atomic Write Unit (PFail): 1 00:25:52.737 Atomic Compare & Write Unit: 1 00:25:52.737 Fused Compare & Write: Not Supported 00:25:52.737 Scatter-Gather List 00:25:52.737 SGL Command Set: Supported 00:25:52.737 SGL Keyed: Supported 00:25:52.737 SGL Bit Bucket Descriptor: Not Supported 00:25:52.737 SGL Metadata Pointer: Not Supported 00:25:52.737 Oversized SGL: Not Supported 00:25:52.737 SGL Metadata Address: Not Supported 00:25:52.737 SGL Offset: Supported 00:25:52.737 Transport SGL Data Block: Not Supported 00:25:52.737 Replay Protected Memory Block: Not Supported 00:25:52.737 00:25:52.737 Firmware Slot Information 00:25:52.737 ========================= 00:25:52.737 Active slot: 0 00:25:52.737 00:25:52.737 00:25:52.737 Error Log 00:25:52.737 ========= 00:25:52.737 00:25:52.737 Active Namespaces 00:25:52.737 ================= 00:25:52.737 Discovery Log Page 00:25:52.737 ================== 00:25:52.737 Generation Counter: 2 00:25:52.737 Number of Records: 2 00:25:52.737 Record Format: 0 00:25:52.737 00:25:52.737 Discovery Log Entry 0 00:25:52.737 ---------------------- 00:25:52.737 Transport Type: 1 (RDMA) 00:25:52.737 Address Family: 1 (IPv4) 00:25:52.737 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:52.737 Entry Flags: 00:25:52.737 Duplicate Returned Information: 0 00:25:52.737 Explicit Persistent Connection Support for Discovery: 0 00:25:52.737 Transport Requirements: 00:25:52.737 Secure Channel: Not Specified 00:25:52.737 Port ID: 1 (0x0001) 00:25:52.737 Controller ID: 65535 (0xffff) 00:25:52.737 Admin Max SQ Size: 32 00:25:52.737 Transport Service Identifier: 4420 00:25:52.737 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:52.737 Transport Address: 192.168.100.8 00:25:52.737 Transport Specific Address Subtype - RDMA 00:25:52.737 RDMA QP Service Type: 1 (Reliable Connected) 00:25:52.737 RDMA Provider Type: 1 (No provider specified) 00:25:52.737 RDMA CM Service: 1 (RDMA_CM) 00:25:52.737 Discovery Log Entry 1 00:25:52.737 ---------------------- 00:25:52.737 
Transport Type: 1 (RDMA) 00:25:52.737 Address Family: 1 (IPv4) 00:25:52.737 Subsystem Type: 2 (NVM Subsystem) 00:25:52.737 Entry Flags: 00:25:52.737 Duplicate Returned Information: 0 00:25:52.737 Explicit Persistent Connection Support for Discovery: 0 00:25:52.737 Transport Requirements: 00:25:52.737 Secure Channel: Not Specified 00:25:52.737 Port ID: 1 (0x0001) 00:25:52.737 Controller ID: 65535 (0xffff) 00:25:52.737 Admin Max SQ Size: 32 00:25:52.737 Transport Service Identifier: 4420 00:25:52.737 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:52.737 Transport Address: 192.168.100.8 00:25:52.737 Transport Specific Address Subtype - RDMA 00:25:52.737 RDMA QP Service Type: 1 (Reliable Connected) 00:25:52.737 RDMA Provider Type: 1 (No provider specified) 00:25:52.737 RDMA CM Service: 1 (RDMA_CM) 00:25:52.737 23:16:44 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:52.737 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.996 get_feature(0x01) failed 00:25:52.996 get_feature(0x02) failed 00:25:52.996 get_feature(0x04) failed 00:25:52.996 ===================================================== 00:25:52.996 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:52.996 ===================================================== 00:25:52.996 Controller Capabilities/Features 00:25:52.996 ================================ 00:25:52.996 Vendor ID: 0000 00:25:52.996 Subsystem Vendor ID: 0000 00:25:52.996 Serial Number: 904dd18a6db7b826c085 00:25:52.996 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:52.996 Firmware Version: 6.7.0-68 00:25:52.996 Recommended Arb Burst: 6 00:25:52.996 IEEE OUI Identifier: 00 00 00 00:25:52.996 Multi-path I/O 00:25:52.996 May have multiple subsystem ports: Yes 00:25:52.996 May have multiple controllers: Yes 00:25:52.996 Associated with SR-IOV VF: No 00:25:52.996 Max Data Transfer Size: 1048576 00:25:52.996 Max Number of Namespaces: 1024 00:25:52.996 Max Number of I/O Queues: 128 00:25:52.996 NVMe Specification Version (VS): 1.3 00:25:52.996 NVMe Specification Version (Identify): 1.3 00:25:52.996 Maximum Queue Entries: 128 00:25:52.996 Contiguous Queues Required: No 00:25:52.996 Arbitration Mechanisms Supported 00:25:52.996 Weighted Round Robin: Not Supported 00:25:52.996 Vendor Specific: Not Supported 00:25:52.996 Reset Timeout: 7500 ms 00:25:52.996 Doorbell Stride: 4 bytes 00:25:52.996 NVM Subsystem Reset: Not Supported 00:25:52.996 Command Sets Supported 00:25:52.996 NVM Command Set: Supported 00:25:52.996 Boot Partition: Not Supported 00:25:52.996 Memory Page Size Minimum: 4096 bytes 00:25:52.996 Memory Page Size Maximum: 4096 bytes 00:25:52.996 Persistent Memory Region: Not Supported 00:25:52.996 Optional Asynchronous Events Supported 00:25:52.996 Namespace Attribute Notices: Supported 00:25:52.996 Firmware Activation Notices: Not Supported 00:25:52.996 ANA Change Notices: Supported 00:25:52.996 PLE Aggregate Log Change Notices: Not Supported 00:25:52.996 LBA Status Info Alert Notices: Not Supported 00:25:52.996 EGE Aggregate Log Change Notices: Not Supported 00:25:52.996 Normal NVM Subsystem Shutdown event: Not Supported 00:25:52.996 Zone Descriptor Change Notices: Not Supported 00:25:52.996 Discovery Log Change Notices: Not Supported 00:25:52.996 Controller Attributes 00:25:52.996 128-bit Host Identifier: 
Supported 00:25:52.996 Non-Operational Permissive Mode: Not Supported 00:25:52.996 NVM Sets: Not Supported 00:25:52.996 Read Recovery Levels: Not Supported 00:25:52.996 Endurance Groups: Not Supported 00:25:52.996 Predictable Latency Mode: Not Supported 00:25:52.996 Traffic Based Keep ALive: Supported 00:25:52.996 Namespace Granularity: Not Supported 00:25:52.996 SQ Associations: Not Supported 00:25:52.996 UUID List: Not Supported 00:25:52.996 Multi-Domain Subsystem: Not Supported 00:25:52.996 Fixed Capacity Management: Not Supported 00:25:52.996 Variable Capacity Management: Not Supported 00:25:52.996 Delete Endurance Group: Not Supported 00:25:52.996 Delete NVM Set: Not Supported 00:25:52.996 Extended LBA Formats Supported: Not Supported 00:25:52.996 Flexible Data Placement Supported: Not Supported 00:25:52.996 00:25:52.996 Controller Memory Buffer Support 00:25:52.996 ================================ 00:25:52.996 Supported: No 00:25:52.996 00:25:52.996 Persistent Memory Region Support 00:25:52.996 ================================ 00:25:52.996 Supported: No 00:25:52.996 00:25:52.996 Admin Command Set Attributes 00:25:52.996 ============================ 00:25:52.996 Security Send/Receive: Not Supported 00:25:52.996 Format NVM: Not Supported 00:25:52.996 Firmware Activate/Download: Not Supported 00:25:52.996 Namespace Management: Not Supported 00:25:52.996 Device Self-Test: Not Supported 00:25:52.996 Directives: Not Supported 00:25:52.996 NVMe-MI: Not Supported 00:25:52.996 Virtualization Management: Not Supported 00:25:52.996 Doorbell Buffer Config: Not Supported 00:25:52.996 Get LBA Status Capability: Not Supported 00:25:52.996 Command & Feature Lockdown Capability: Not Supported 00:25:52.996 Abort Command Limit: 4 00:25:52.996 Async Event Request Limit: 4 00:25:52.996 Number of Firmware Slots: N/A 00:25:52.996 Firmware Slot 1 Read-Only: N/A 00:25:52.996 Firmware Activation Without Reset: N/A 00:25:52.996 Multiple Update Detection Support: N/A 00:25:52.996 Firmware Update Granularity: No Information Provided 00:25:52.996 Per-Namespace SMART Log: Yes 00:25:52.997 Asymmetric Namespace Access Log Page: Supported 00:25:52.997 ANA Transition Time : 10 sec 00:25:52.997 00:25:52.997 Asymmetric Namespace Access Capabilities 00:25:52.997 ANA Optimized State : Supported 00:25:52.997 ANA Non-Optimized State : Supported 00:25:52.997 ANA Inaccessible State : Supported 00:25:52.997 ANA Persistent Loss State : Supported 00:25:52.997 ANA Change State : Supported 00:25:52.997 ANAGRPID is not changed : No 00:25:52.997 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:52.997 00:25:52.997 ANA Group Identifier Maximum : 128 00:25:52.997 Number of ANA Group Identifiers : 128 00:25:52.997 Max Number of Allowed Namespaces : 1024 00:25:52.997 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:52.997 Command Effects Log Page: Supported 00:25:52.997 Get Log Page Extended Data: Supported 00:25:52.997 Telemetry Log Pages: Not Supported 00:25:52.997 Persistent Event Log Pages: Not Supported 00:25:52.997 Supported Log Pages Log Page: May Support 00:25:52.997 Commands Supported & Effects Log Page: Not Supported 00:25:52.997 Feature Identifiers & Effects Log Page:May Support 00:25:52.997 NVMe-MI Commands & Effects Log Page: May Support 00:25:52.997 Data Area 4 for Telemetry Log: Not Supported 00:25:52.997 Error Log Page Entries Supported: 128 00:25:52.997 Keep Alive: Supported 00:25:52.997 Keep Alive Granularity: 1000 ms 00:25:52.997 00:25:52.997 NVM Command Set Attributes 00:25:52.997 ========================== 
00:25:52.997 Submission Queue Entry Size 00:25:52.997 Max: 64 00:25:52.997 Min: 64 00:25:52.997 Completion Queue Entry Size 00:25:52.997 Max: 16 00:25:52.997 Min: 16 00:25:52.997 Number of Namespaces: 1024 00:25:52.997 Compare Command: Not Supported 00:25:52.997 Write Uncorrectable Command: Not Supported 00:25:52.997 Dataset Management Command: Supported 00:25:52.997 Write Zeroes Command: Supported 00:25:52.997 Set Features Save Field: Not Supported 00:25:52.997 Reservations: Not Supported 00:25:52.997 Timestamp: Not Supported 00:25:52.997 Copy: Not Supported 00:25:52.997 Volatile Write Cache: Present 00:25:52.997 Atomic Write Unit (Normal): 1 00:25:52.997 Atomic Write Unit (PFail): 1 00:25:52.997 Atomic Compare & Write Unit: 1 00:25:52.997 Fused Compare & Write: Not Supported 00:25:52.997 Scatter-Gather List 00:25:52.997 SGL Command Set: Supported 00:25:52.997 SGL Keyed: Supported 00:25:52.997 SGL Bit Bucket Descriptor: Not Supported 00:25:52.997 SGL Metadata Pointer: Not Supported 00:25:52.997 Oversized SGL: Not Supported 00:25:52.997 SGL Metadata Address: Not Supported 00:25:52.997 SGL Offset: Supported 00:25:52.997 Transport SGL Data Block: Not Supported 00:25:52.997 Replay Protected Memory Block: Not Supported 00:25:52.997 00:25:52.997 Firmware Slot Information 00:25:52.997 ========================= 00:25:52.997 Active slot: 0 00:25:52.997 00:25:52.997 Asymmetric Namespace Access 00:25:52.997 =========================== 00:25:52.997 Change Count : 0 00:25:52.997 Number of ANA Group Descriptors : 1 00:25:52.997 ANA Group Descriptor : 0 00:25:52.997 ANA Group ID : 1 00:25:52.997 Number of NSID Values : 1 00:25:52.997 Change Count : 0 00:25:52.997 ANA State : 1 00:25:52.997 Namespace Identifier : 1 00:25:52.997 00:25:52.997 Commands Supported and Effects 00:25:52.997 ============================== 00:25:52.997 Admin Commands 00:25:52.997 -------------- 00:25:52.997 Get Log Page (02h): Supported 00:25:52.997 Identify (06h): Supported 00:25:52.997 Abort (08h): Supported 00:25:52.997 Set Features (09h): Supported 00:25:52.997 Get Features (0Ah): Supported 00:25:52.997 Asynchronous Event Request (0Ch): Supported 00:25:52.997 Keep Alive (18h): Supported 00:25:52.997 I/O Commands 00:25:52.997 ------------ 00:25:52.997 Flush (00h): Supported 00:25:52.997 Write (01h): Supported LBA-Change 00:25:52.997 Read (02h): Supported 00:25:52.997 Write Zeroes (08h): Supported LBA-Change 00:25:52.997 Dataset Management (09h): Supported 00:25:52.997 00:25:52.997 Error Log 00:25:52.997 ========= 00:25:52.997 Entry: 0 00:25:52.997 Error Count: 0x3 00:25:52.997 Submission Queue Id: 0x0 00:25:52.997 Command Id: 0x5 00:25:52.997 Phase Bit: 0 00:25:52.997 Status Code: 0x2 00:25:52.997 Status Code Type: 0x0 00:25:52.997 Do Not Retry: 1 00:25:52.997 Error Location: 0x28 00:25:52.997 LBA: 0x0 00:25:52.997 Namespace: 0x0 00:25:52.997 Vendor Log Page: 0x0 00:25:52.997 ----------- 00:25:52.997 Entry: 1 00:25:52.997 Error Count: 0x2 00:25:52.997 Submission Queue Id: 0x0 00:25:52.997 Command Id: 0x5 00:25:52.997 Phase Bit: 0 00:25:52.997 Status Code: 0x2 00:25:52.997 Status Code Type: 0x0 00:25:52.997 Do Not Retry: 1 00:25:52.997 Error Location: 0x28 00:25:52.997 LBA: 0x0 00:25:52.997 Namespace: 0x0 00:25:52.997 Vendor Log Page: 0x0 00:25:52.997 ----------- 00:25:52.997 Entry: 2 00:25:52.997 Error Count: 0x1 00:25:52.997 Submission Queue Id: 0x0 00:25:52.997 Command Id: 0x0 00:25:52.997 Phase Bit: 0 00:25:52.997 Status Code: 0x2 00:25:52.997 Status Code Type: 0x0 00:25:52.997 Do Not Retry: 1 00:25:52.997 Error Location: 
0x28 00:25:52.997 LBA: 0x0 00:25:52.997 Namespace: 0x0 00:25:52.997 Vendor Log Page: 0x0 00:25:52.997 00:25:52.997 Number of Queues 00:25:52.997 ================ 00:25:52.997 Number of I/O Submission Queues: 128 00:25:52.997 Number of I/O Completion Queues: 128 00:25:52.997 00:25:52.997 ZNS Specific Controller Data 00:25:52.997 ============================ 00:25:52.997 Zone Append Size Limit: 0 00:25:52.997 00:25:52.997 00:25:52.997 Active Namespaces 00:25:52.997 ================= 00:25:52.997 get_feature(0x05) failed 00:25:52.997 Namespace ID:1 00:25:52.997 Command Set Identifier: NVM (00h) 00:25:52.997 Deallocate: Supported 00:25:52.997 Deallocated/Unwritten Error: Not Supported 00:25:52.997 Deallocated Read Value: Unknown 00:25:52.997 Deallocate in Write Zeroes: Not Supported 00:25:52.997 Deallocated Guard Field: 0xFFFF 00:25:52.997 Flush: Supported 00:25:52.997 Reservation: Not Supported 00:25:52.997 Namespace Sharing Capabilities: Multiple Controllers 00:25:52.997 Size (in LBAs): 3125627568 (1490GiB) 00:25:52.997 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:52.997 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:52.997 UUID: 1961cc07-4b2e-4fb4-824b-83ab3055c667 00:25:52.997 Thin Provisioning: Not Supported 00:25:52.997 Per-NS Atomic Units: Yes 00:25:52.997 Atomic Boundary Size (Normal): 0 00:25:52.997 Atomic Boundary Size (PFail): 0 00:25:52.997 Atomic Boundary Offset: 0 00:25:52.997 NGUID/EUI64 Never Reused: No 00:25:52.997 ANA group ID: 1 00:25:52.997 Namespace Write Protected: No 00:25:52.997 Number of LBA Formats: 1 00:25:52.997 Current LBA Format: LBA Format #00 00:25:52.997 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:52.997 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:52.997 rmmod nvme_rdma 00:25:52.997 rmmod nvme_fabrics 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 
0 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:52.997 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:52.998 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:52.998 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:52.998 23:16:45 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:56.307 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:56.307 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.243 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:25:57.502 00:25:57.502 real 0m15.324s 00:25:57.502 user 0m4.239s 00:25:57.502 sys 0m8.777s 00:25:57.502 23:16:49 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:57.502 23:16:49 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.502 ************************************ 00:25:57.502 END TEST nvmf_identify_kernel_target 00:25:57.502 ************************************ 00:25:57.502 23:16:49 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:57.502 23:16:49 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:57.502 23:16:49 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:57.502 23:16:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:57.502 ************************************ 00:25:57.502 START TEST nvmf_auth_host 00:25:57.502 ************************************ 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:57.502 * Looking for test storage... 
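The identify_kernel_target run above builds and then tears down a kernel NVMe-oF/RDMA target purely through nvmet configfs: mkdir the subsystem, namespace and port, echo the backing device and listener parameters, symlink the subsystem under the port, and later rm -f / rmdir / modprobe -r everything again. The sketch below summarizes that flow in one place. It is a minimal, hypothetical reconstruction: bash xtrace does not print redirection targets, so the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed from the stock Linux nvmet configfs layout rather than taken from this log; the NQN, block device and address values are the ones visible in the trace.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the configure_kernel_target / clean_kernel_target flow
# traced above. Attribute file names are assumptions (standard nvmet configfs
# layout); the xtrace only shows the echo/mkdir/ln/rmdir commands themselves.
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                  # as in the trace
modprobe nvmet-rdma                             # RDMA transport; removed as nvmet_rdma during cleanup

mkdir -p "$ns" "$port"                          # subsystem, namespace 1, port 1
echo "SPDK-$nqn"   > "$subsys/attr_model"           # assumed target of 'echo SPDK-nqn...'
echo 1             > "$subsys/attr_allow_any_host"  # assumed target of the first 'echo 1'
echo /dev/nvme0n1  > "$ns/device_path"              # block device picked above
echo 1             > "$ns/enable"

echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

# ... nvme discover / spdk_nvme_identify against 192.168.100.8:4420 ...

# Teardown, mirroring clean_kernel_target:
echo 0 > "$ns/enable"                           # assumed target of the final 'echo 0'
rm -f "$port/subsystems/$nqn"
rmdir "$ns" "$port" "$subsys"
modprobe -r nvmet_rdma nvmet
```

After the teardown the node is handed back to setup.sh, which rebinds the ioat and NVMe devices to vfio-pci, as the following log lines show.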
00:25:57.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.502 23:16:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.503 23:16:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:04.071 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:04.071 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.071 23:16:55 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:04.071 Found net devices under 0000:da:00.0: mlx_0_0 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:04.071 Found net devices under 0000:da:00.1: mlx_0_1 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:04.071 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:04.072 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:04.072 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:04.072 altname enp218s0f0np0 00:26:04.072 altname ens818f0np0 00:26:04.072 inet 192.168.100.8/24 scope global mlx_0_0 00:26:04.072 valid_lft forever preferred_lft forever 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:04.072 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:04.072 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:04.072 altname enp218s0f1np1 00:26:04.072 altname ens818f1np1 00:26:04.072 inet 192.168.100.9/24 scope global mlx_0_1 00:26:04.072 valid_lft forever preferred_lft forever 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:04.072 192.168.100.9' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:04.072 192.168.100.9' 00:26:04.072 
23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:04.072 192.168.100.9' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1057274 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1057274 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1057274 ']' 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
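nvmftestinit has finished here: the two addresses are collected into RDMA_IP_LIST and split into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with the head/tail pipeline shown in the trace, NVMF_TRANSPORT_OPTS is set to '-t rdma --num-shared-buffers 1024', nvme-rdma is loaded, and nvmfappstart has launched build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth (pid 1057274) and is waiting for it to serve /var/tmp/spdk.sock. A sketch of the IP-list split, using the literal values from this run:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First entry becomes the primary target address, the next entry the secondary one.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9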
00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:04.072 23:16:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=47277b3e646250939ae6cabf0051e86b 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DwW 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 47277b3e646250939ae6cabf0051e86b 0 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 47277b3e646250939ae6cabf0051e86b 0 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=47277b3e646250939ae6cabf0051e86b 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:04.333 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DwW 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DwW 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DwW 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b8340920668d7cf2485654fd0548346523df23d05079b3791a4540d123b8eb3 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.T1c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b8340920668d7cf2485654fd0548346523df23d05079b3791a4540d123b8eb3 3 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b8340920668d7cf2485654fd0548346523df23d05079b3791a4540d123b8eb3 3 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b8340920668d7cf2485654fd0548346523df23d05079b3791a4540d123b8eb3 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.T1c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.T1c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.T1c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=32e222fed6da99d56dc69961fbc8f2946c5479d2a8ea6bb0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Xhy 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32e222fed6da99d56dc69961fbc8f2946c5479d2a8ea6bb0 0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32e222fed6da99d56dc69961fbc8f2946c5479d2a8ea6bb0 0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32e222fed6da99d56dc69961fbc8f2946c5479d2a8ea6bb0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.Xhy 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Xhy 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Xhy 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cec600766048a3fa7f5c8140c7b921f20dda325b2a7bb8a0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hsw 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cec600766048a3fa7f5c8140c7b921f20dda325b2a7bb8a0 2 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cec600766048a3fa7f5c8140c7b921f20dda325b2a7bb8a0 2 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cec600766048a3fa7f5c8140c7b921f20dda325b2a7bb8a0 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hsw 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hsw 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Hsw 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b1456e72791c664e3ba5cf64f916598 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OKM 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b1456e72791c664e3ba5cf64f916598 1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 6b1456e72791c664e3ba5cf64f916598 1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b1456e72791c664e3ba5cf64f916598 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OKM 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OKM 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.OKM 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c0566c9b751aad8610e837049bb788c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nED 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c0566c9b751aad8610e837049bb788c 1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c0566c9b751aad8610e837049bb788c 1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c0566c9b751aad8610e837049bb788c 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:04.634 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nED 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nED 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nED 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:04.893 23:16:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06f6ecda4570ac1e5dce285a5f37cc2d028c8c17196ed597 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Usy 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06f6ecda4570ac1e5dce285a5f37cc2d028c8c17196ed597 2 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06f6ecda4570ac1e5dce285a5f37cc2d028c8c17196ed597 2 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=06f6ecda4570ac1e5dce285a5f37cc2d028c8c17196ed597 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Usy 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Usy 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Usy 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:04.893 23:16:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=62c72e9e2a91006f3727a3f0ea7534c2 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7ov 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 62c72e9e2a91006f3727a3f0ea7534c2 0 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 62c72e9e2a91006f3727a3f0ea7534c2 0 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=62c72e9e2a91006f3727a3f0ea7534c2 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7ov 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7ov 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7ov 
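Each gen_dhchap_key <digest> <len> call traced above draws len/2 bytes from /dev/urandom, renders them as a len-character hex string, wraps that string into a DHHC-1 secret whose second field encodes the digest (00=null, 01=sha256, 02=sha384, 03=sha512), and leaves the result in a chmod-0600 temp file whose path lands in keys[] or ckeys[]. A hedged sketch of one iteration; the DHHC-1 wrapping itself is done by the inline python helper behind format_dhchap_key, whose body the trace does not show:

# Sketch only: mirrors the "gen_dhchap_key sha256 32" call seen above.
digest=sha256
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 32 hex characters of randomness
file=$(mktemp -t "spdk.key-${digest}.XXX")       # e.g. /tmp/spdk.key-sha256.OKM
# format_dhchap_key "$key" 1 (digest index 1 = sha256) produces a string of the form
# "DHHC-1:01:<base64 payload>:" which ends up in "$file".
chmod 0600 "$file"
echo "$file"                                     # captured into keys[2] in this run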
00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c4438e70d77e706fe0794227307cdddfb9f6594c88dd297d2bedb5bc814219c 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Iax 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c4438e70d77e706fe0794227307cdddfb9f6594c88dd297d2bedb5bc814219c 3 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c4438e70d77e706fe0794227307cdddfb9f6594c88dd297d2bedb5bc814219c 3 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c4438e70d77e706fe0794227307cdddfb9f6594c88dd297d2bedb5bc814219c 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Iax 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Iax 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Iax 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1057274 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1057274 ']' 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
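All five secrets and their controller-key counterparts now exist (keys[0]/ckeys[0] through keys[4], with ckeys[4] deliberately left empty), and the script waits on pid 1057274 once more before handing them to the target. The loop that follows registers each file with nvmf_tgt through the keyring_file_add_key RPC; rpc_cmd is the harness wrapper around scripts/rpc.py, so done by hand the first registrations would look roughly like this (file paths taken from this run, default RPC socket assumed):

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.DwW
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T1c
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.Xhy
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hsw
# ...and likewise key2/ckey2, key3/ckey3 and key4 (key4 has no controller key).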
00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:04.893 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DwW 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.T1c ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T1c 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Xhy 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Hsw ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hsw 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OKM 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nED ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nED 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Usy 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7ov ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7ov 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Iax 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.152 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:05.153 23:16:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:26:08.437 Waiting for block devices as requested 00:26:08.437 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:26:08.437 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:08.437 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:08.437 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:08.437 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:08.696 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:08.696 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:08.696 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:08.696 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:08.956 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:08.956 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:08.956 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:09.216 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:09.216 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:09.216 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:09.216 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:09.474 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:10.041 No valid GPT data, bailing 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:10.041 
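configure_kernel_target is building a kernel nvmet subsystem named nqn.2024-02.io.spdk:cnode0, backed by the local /dev/nvme0n1 that passed the GPT check above, and exposing it on an RDMA port at 192.168.100.8:4420; the echo commands that follow write exactly those values into configfs. xtrace strips the redirections, so the attribute file names in the sketch below are the standard nvmet configfs names, assumed rather than read from the trace:

# Hedged reconstruction: values are from the trace, attribute paths are assumed.
# (The first echo in the trace writes the 'SPDK-...' model string; its target attribute
#  is not visible in the output and is omitted here.)
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
echo 1             > "$subsys/attr_allow_any_host"   # nvmet_auth_init flips this to 0 later
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                  # publish the subsystem on the port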
23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:26:10.041 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:10.042 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:10.042 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:10.042 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:26:10.300 00:26:10.300 Discovery Log Number of Records 2, Generation counter 2 00:26:10.300 =====Discovery Log Entry 0====== 00:26:10.300 trtype: rdma 00:26:10.300 adrfam: ipv4 00:26:10.300 subtype: current discovery subsystem 00:26:10.300 treq: not specified, sq flow control disable supported 00:26:10.300 portid: 1 00:26:10.300 trsvcid: 4420 00:26:10.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:10.300 traddr: 192.168.100.8 00:26:10.300 eflags: none 00:26:10.300 rdma_prtype: not specified 00:26:10.300 rdma_qptype: connected 00:26:10.300 rdma_cms: rdma-cm 00:26:10.300 rdma_pkey: 0x0000 00:26:10.300 =====Discovery Log Entry 1====== 00:26:10.300 trtype: rdma 00:26:10.300 adrfam: ipv4 00:26:10.300 subtype: nvme subsystem 00:26:10.300 treq: not specified, sq flow control disable supported 00:26:10.300 portid: 1 00:26:10.300 trsvcid: 4420 00:26:10.300 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:10.300 traddr: 192.168.100.8 00:26:10.300 eflags: none 00:26:10.300 rdma_prtype: not specified 00:26:10.300 rdma_qptype: connected 00:26:10.300 rdma_cms: rdma-cm 00:26:10.300 rdma_pkey: 0x0000 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.300 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.301 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.559 nvme0n1 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:10.559 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.560 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.818 nvme0n1 00:26:10.818 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.818 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.818 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.818 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.818 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.819 23:17:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.819 23:17:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.075 nvme0n1 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.075 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.076 23:17:03 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.076 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.334 nvme0n1 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:11.334 23:17:03 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.334 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.593 nvme0n1 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.593 23:17:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.852 nvme0n1 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.852 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.111 nvme0n1 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.111 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
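At this point the trace has finished the sha256/ffdhe2048 sweep over keyids 0-4 and is repeating the same steps for ffdhe3072 (ffdhe4096 follows below): for each keyid the script programs the key pair on the target with nvmet_auth_set_key, pins the host to the digest and DH group under test with bdev_nvme_set_options, attaches the controller over RDMA with the matching --dhchap-key/--dhchap-ctrlr-key, checks that nvme0 came up, and detaches it again. A condensed sketch of that loop, reusing the rpc_cmd, nvmet_auth_set_key and ckeys[] helpers that host/auth.sh sets up earlier (names and arguments are taken from this trace; the loop bounds are assumed from the keyids and DH groups it exercises here):

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in 0 1 2 3 4; do
            # target side: install key/ckey for this keyid under hmac(sha256)
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            # host side: only allow the digest/dhgroup under test
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # the controller key is optional; keyid 4 has no ckey, so the argument is dropped
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            # connect with DH-HMAC-CHAP, verify the controller name, then tear it down
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
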
00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.370 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.629 nvme0n1 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.629 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 nvme0n1 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 23:17:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.888 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.889 
23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.889 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.147 nvme0n1 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.147 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.148 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.406 nvme0n1 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:13.407 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.665 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:13.666 23:17:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.924 nvme0n1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.924 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.183 nvme0n1 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.183 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.441 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.442 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.700 nvme0n1 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.700 23:17:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.957 nvme0n1 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:14.957 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.958 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.215 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.474 nvme0n1 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.474 
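[editor note] For readability, here is a minimal sketch of the verify-and-tear-down step that the trace above repeats after every bdev_nvme_attach_controller call. The RPC names, the jq filter and the nvme0 name check are taken from the host/auth.sh@64-65 lines visible in the xtrace output; wrapping them in a standalone helper is only an illustration, and rpc_cmd is assumed to be the autotest wrapper around SPDK's RPC client that the test sources from common scripts.

  # Assumed helper, mirroring the host/auth.sh@64-65 steps seen in the xtrace output.
  verify_and_detach() {
      local name
      # List the controllers created by bdev_nvme_attach_controller and keep their names.
      name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
      # The test expects exactly the controller it attached in this iteration.
      [[ ${name} == "nvme0" ]]
      # Tear it down so the next digest/dhgroup/key combination starts from a clean state.
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The detach is what allows the same controller name, nvme0, to be reused for every key index in the loop.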
23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.474 23:17:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.042 nvme0n1 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:16.042 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.043 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.609 nvme0n1 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
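[editor note] The get_main_ns_ip trace that recurs throughout this section (nvmf/common.sh@741-755) resolves the address that bdev_nvme_attach_controller connects to. A rough reconstruction follows; the associative-array entries and the final 192.168.100.8 value come straight from the trace, while the TEST_TRANSPORT variable name, the indirect expansion and the early returns are assumptions about how the helper is written.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Bail out if the transport is unknown or has no candidate variable (assumed checks,
      # matching the [[ -z rdma ]] / [[ -z NVMF_FIRST_TARGET_IP ]] tests in the trace).
      [[ -z ${TEST_TRANSPORT} ]] && return 1
      [[ -z ${ip_candidates[${TEST_TRANSPORT}]} ]] && return 1
      ip=${ip_candidates[${TEST_TRANSPORT}]}
      # Indirect expansion; in this run NVMF_FIRST_TARGET_IP holds 192.168.100.8.
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }

Because the transport here is rdma, every attach in this log ends up targeting 192.168.100.8:4420.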
00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.609 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.610 
23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.610 23:17:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.868 nvme0n1 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.868 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.126 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.384 nvme0n1 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.384 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:17.642 23:17:09 
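[editor note] nvmet_auth_set_key, whose body is traced at host/auth.sh@42-51 just below (and in every earlier iteration), programs the target side with the digest, DH group and DHHC-1 keys for the given key index. The sketch below keeps only the arguments and echo values that appear in the trace; the xtrace output does not show where the echoes are redirected, so the ${nvmet_host_dir} path and the dhchap_* attribute names are assumptions based on the kernel nvmet-auth configfs layout, not something confirmed by this log.

  # Hypothetical host entry under the nvmet configfs tree (destination not visible in the trace).
  nvmet_host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1
      dhgroup=$2
      keyid=$3
      # keys[] / ckeys[] are the arrays the surrounding "for keyid" loop walks; their
      # DHHC-1 values are the ones printed at host/auth.sh@45-46 in the trace.
      key=${keys[keyid]}
      ckey=${ckeys[keyid]}
      echo "hmac(${digest})" > "${nvmet_host_dir}/dhchap_hash"
      echo "${dhgroup}"      > "${nvmet_host_dir}/dhchap_dhgroup"
      echo "${key}"          > "${nvmet_host_dir}/dhchap_key"
      # keyid 4 has no controller key, hence the [[ -z '' ]] guard visible in the trace.
      [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host_dir}/dhchap_ctrl_key"
  }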
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.642 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.643 23:17:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:17.901 nvme0n1 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.901 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:18.159 23:17:10 
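[editor note] The host side of each iteration, connect_authenticate (host/auth.sh@55-65), drives the rpc_cmd calls traced around this point: restrict the initiator to a single digest and DH group, attach over RDMA with the matching key material, then verify and detach the controller. The condensed sketch below uses only the RPC names and flags that appear in the trace; it is a reconstruction for orientation, not the script verbatim.

  connect_authenticate() {
      local digest dhgroup keyid ckey
      digest=$1 dhgroup=$2 keyid=$3
      # Pass a controller key only when one exists for this index (keyid 4 has none),
      # exactly as the ckey=( ... ) expansion at host/auth.sh@58 does.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # Limit the host to the digest/dhgroup combination under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
      # Attach to the RDMA listener and authenticate with the per-keyid key names.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # Verification and detach (host/auth.sh@64-65) follow, as sketched earlier.
  }

The key1/ckey1 style names refer to keys registered with the bdev_nvme layer earlier in the run; only their use is visible in this part of the log.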
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.159 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.726 nvme0n1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.726 23:17:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.293 nvme0n1 00:26:19.293 23:17:11 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.293 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.293 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.293 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.293 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.551 23:17:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.552 23:17:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.552 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.552 23:17:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.136 nvme0n1 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.136 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.137 23:17:12 
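[editor note] Stepping back, the @100, @101 and @102 markers in the trace reveal three nested loops driving this whole section: digests (sha256 here, advancing to sha384 at the very end of this pass), DH groups (this part of the log covers ffdhe4096, ffdhe6144 and ffdhe8192), and key indexes 0 through 4. A schematic of the driver loop, assuming keys/ckeys are the indexed arrays that the ${!keys[@]} expansion in the trace suggests:

  for digest in "${digests[@]}"; do            # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
          for keyid in "${!keys[@]}"; do       # host/auth.sh@102: indexes 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
          done
      done
  done

Each pass through the inner loop produces one of the repeated attach/get_controllers/detach sequences that make up the bulk of this log.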
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.137 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.071 nvme0n1 00:26:21.071 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.071 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.071 23:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.071 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.071 23:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:21.071 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.071 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.071 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.072 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 nvme0n1 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:21.639 
23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.639 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.640 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.914 nvme0n1 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.914 
23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.914 23:17:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.914 
23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.914 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 nvme0n1 00:26:22.208 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.208 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.209 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 nvme0n1 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.468 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.727 nvme0n1 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:22.727 23:17:14 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.727 23:17:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.986 nvme0n1 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.986 23:17:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.986 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.987 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.245 nvme0n1 
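[Editor's note] The trace above repeats one pattern per digest/dhgroup/keyid combination: the target-side helper nvmet_auth_set_key installs the DHHC-1 secret, then connect_authenticate restricts the host to that digest and DH group via bdev_nvme_set_options, attaches a controller over RDMA with --dhchap-key (and --dhchap-ctrlr-key when a bidirectional ckey is defined), confirms nvme0 exists, and detaches. The sketch below is reconstructed only from the xtrace output in this log and is not the real host/auth.sh: rpc.py standing in for the rpc_cmd wrapper, the literal address/NQNs copied from the trace, the shortened digest/dhgroup lists, and the unshown body of nvmet_auth_set_key are all assumptions.

    #!/usr/bin/env bash
    # Minimal sketch of the DH-HMAC-CHAP matrix exercised in this trace.
    # Assumptions: rpc.py is the SPDK RPC client backing rpc_cmd, keyN/ckeyN
    # are key names already registered on the host, and nvmet_auth_set_key is
    # the target-side helper seen (but not defined) in the trace.

    digests=("sha256" "sha384")                                   # excerpt shows these two
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe8192")    # groups visible above
    declare -a keys ckeys                                         # DHHC-1 secrets, keyid 0..4

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Limit the host to a single digest/dhgroup combination.
        rpc.py bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over RDMA with the selected key; the ctrlr key is optional,
        # mirroring the ${ckeys[keyid]:+...} expansion in the trace.
        rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # The controller only exists if authentication succeeded.
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc.py bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
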
00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.246 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.504 nvme0n1 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.504 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.765 23:17:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.023 nvme0n1 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.023 23:17:16 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:24.023 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.024 
23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.024 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.281 nvme0n1 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:24.281 23:17:16 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.281 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.537 nvme0n1 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.537 23:17:16 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.537 23:17:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.794 nvme0n1 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.794 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.051 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.310 nvme0n1 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.310 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.568 nvme0n1 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.569 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.827 
23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.827 23:17:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.086 nvme0n1 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.086 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.345 nvme0n1 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
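The pass above is one complete sha384/ffdhe4096 round of host/auth.sh: for every key index the target-side secrets are programmed first (nvmet_auth_set_key, host/auth.sh@42-51), and then connect_authenticate (host/auth.sh@55-65) drives the host through set-options, attach, verify and detach. A condensed, hedged sketch of a single iteration follows, paraphrased only from the traced commands; rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, key0/ckey0 name secrets registered earlier in the run (not shown in this part of the log), and $auth_dir plus its attribute file names are placeholders because the xtrace output shows the echo commands but not their redirection targets.

  # Target side (sketch): select digest + DH group and install the DHHC-1 secrets
  # for key index 0. $auth_dir and the file names below are assumed, not from the log.
  echo 'hmac(sha384)' > "$auth_dir/hash"
  echo ffdhe4096      > "$auth_dir/dhgroup"
  echo "${keys[0]}"   > "$auth_dir/key"                        # DHHC-1:00:... secret
  [[ -z ${ckeys[0]} ]] || echo "${ckeys[0]}" > "$auth_dir/ctrl_key"   # skipped when no ckey (keyid 4)

  # Host side (from the traced rpc_cmd calls): restrict the allowed digest/DH group,
  # attach with DH-HMAC-CHAP keys, confirm the controller came up, then detach
  # before the next iteration. --dhchap-ctrlr-key is only passed when a controller
  # key exists for this key index.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The identical sequence then repeats in the trace below for ffdhe6144 and ffdhe8192 with the same set of keys.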
00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.345 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.603 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.604 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.604 23:17:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.604 23:17:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.604 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.604 23:17:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.862 nvme0n1 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.862 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.863 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.121 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.122 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.122 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.122 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.122 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.122 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.380 nvme0n1 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.380 23:17:19 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.380 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.381 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.639 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.640 23:17:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.899 nvme0n1 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.899 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.158 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.416 nvme0n1 00:26:28.416 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.416 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.416 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.417 23:17:20 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.417 23:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.984 nvme0n1 00:26:28.984 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.984 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.984 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.985 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 nvme0n1 00:26:29.552 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.552 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.552 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.811 23:17:21 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.811 23:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.379 nvme0n1 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.379 
23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.379 23:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.946 nvme0n1 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.946 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.203 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.204 
23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.204 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 nvme0n1 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.770 23:17:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.770 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.337 nvme0n1 00:26:32.337 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.337 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.337 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.337 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 
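The nvmet_auth_set_key <digest> <dhgroup> <keyid> calls traced above prepare the kernel nvmet target side before each host attach. auth.sh itself is not reproduced in this log, so the following is only a hedged sketch: the configfs paths and the per-host directory are assumptions, while the digest, dhgroup and key handling mirrors the host/auth.sh@42-51 trace lines (keys/ckeys are the arrays the @102 loop iterates over).

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        # Assumed destination: per-host DH-CHAP attributes in nvmet configfs.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. 'hmac(sha512)'
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe2048
        echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:.. host secret
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }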
00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.596 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 nvme0n1 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.855 23:17:24 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
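Between key settings, each iteration checks that the authenticated attach actually produced a controller and then removes it. That check reduces to the three commands below, restated from the host/auth.sh@64-65 lines in the trace (rpc_cmd is the test suite's wrapper around SPDK's rpc.py):

    # Confirm the DH-CHAP attach created controller "nvme0", then detach it so
    # the next digest/dhgroup/keyid combination starts from a clean state.
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${ctrlr} == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0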
00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.855 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.856 23:17:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.114 nvme0n1 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.114 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
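The host-side half of each iteration (connect_authenticate in the trace) is the same two RPCs every time: restrict the initiator to a single digest and DH group, then attach over RDMA with the DH-CHAP secrets for the current keyid. Restated here from the host/auth.sh@60-61 lines of the sha512/ffdhe2048, keyid=1 pass traced above; key1/ckey1 are key names registered earlier in the test, outside this excerpt:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1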
00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.115 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.373 nvme0n1 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.373 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.374 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.632 nvme0n1 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.632 23:17:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.891 nvme0n1 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.891 23:17:26 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.891 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.150 nvme0n1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.150 23:17:26 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.150 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.409 nvme0n1 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.409 23:17:26 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.409 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.410 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.668 nvme0n1 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.668 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.927 23:17:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.928 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.928 23:17:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 nvme0n1 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.186 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.187 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.187 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.187 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.446 nvme0n1 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.446 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.704 nvme0n1 00:26:35.704 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.704 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.704 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.704 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
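For readability, the host-side cycle the trace keeps repeating — restrict the allowed digest/DH group, attach over RDMA with a DH-HMAC-CHAP key, confirm the controller appeared, then detach — is sketched below as the raw RPC calls. This is a sketch only: it assumes rpc_cmd is the usual wrapper around SPDK's scripts/rpc.py, and that the key names key3/ckey3 were registered earlier in the run (not shown in this excerpt); the transport, address, port and NQNs are the ones used throughout this log.

    # Sketch of one connect_authenticate iteration, using values visible in the trace
    # (sha512 / ffdhe3072, key3 with controller key ckey3).
    rpc=./scripts/rpc.py    # assumed path to SPDK's RPC client

    # Limit the host to the digest/DH-group pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach over RDMA, authenticating with key3 and offering ckey3 so the
    # controller is authenticated in the reverse direction as well.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The attach only succeeds if authentication did; verify, then tear down
    # before the next digest/DH-group/key combination.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0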
00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.705 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.964 23:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.222 nvme0n1 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.222 
23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.222 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.481 nvme0n1 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 
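The for-dhgroup/for-keyid markers running through the trace correspond to the nested loop outlined below: for each DH group the target side is re-keyed (nvmet_auth_set_key) and the host-side connect/verify/detach cycle (connect_authenticate) is re-run. Helper names and loop variables are taken from the host/auth.sh@101–@104 lines in the trace; their bodies and the keys/ckeys arrays are defined earlier in the script and are not reproduced here, so treat this as an outline rather than a runnable excerpt.

    digest=sha512                                       # digest under test in this part of the log
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # groups that appear in this excerpt

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                  # keys[]/ckeys[] are populated earlier in auth.sh
            # Program the target side with the matching hash, DH group and secret ...
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # ... then have the SPDK host attach with key$keyid, verify nvme0, and detach.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done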
00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.481 23:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.047 nvme0n1 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.047 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.048 23:17:29 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.048 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.305 nvme0n1 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.305 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 nvme0n1 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.881 23:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.882 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.882 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.882 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.192 nvme0n1 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.192 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.450 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.451 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.709 nvme0n1 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.709 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.967 23:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.225 nvme0n1 00:26:39.225 23:17:31 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.225 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.226 23:17:31 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.226 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.792 nvme0n1 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDcyNzdiM2U2NDYyNTA5MzlhZTZjYWJmMDA1MWU4NmLuhshU: 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWI4MzQwOTIwNjY4ZDdjZjI0ODU2NTRmZDA1NDgzNDY1MjNkZjIzZDA1MDc5YjM3OTFhNDU0MGQxMjNiOGViM3g8G9g=: 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.792 23:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.357 nvme0n1 00:26:40.357 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.357 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.357 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.357 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.357 23:17:32 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.613 23:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.178 nvme0n1 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmIxNDU2ZTcyNzkxYzY2NGUzYmE1Y2Y2NGY5MTY1OTgxXg6B: 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWMwNTY2YzliNzUxYWFkODYxMGU4MzcwNDliYjc4OGP093Fh: 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.178 23:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.742 nvme0n1 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.742 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.000 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.000 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:42.000 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZmNmVjZGE0NTcwYWMxZTVkY2UyODVhNWYzN2NjMmQwMjhjOGMxNzE5NmVkNTk34AhHhw==: 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: ]] 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJjNzJlOWUyYTkxMDA2ZjM3MjdhM2YwZWE3NTM0YzL5bYjC: 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:42.001 23:17:34 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.001 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.566 nvme0n1 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2M0NDM4ZTcwZDc3ZTcwNmZlMDc5NDIyNzMwN2NkZGRmYjlmNjU5NGM4OGRkMjk3ZDJiZWRiNWJjODE0MjE5Y3yt0Sc=: 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.567 23:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.132 nvme0n1 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.132 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlMjIyZmVkNmRhOTlkNTZkYzY5OTYxZmJjOGYyOTQ2YzU0NzlkMmE4ZWE2YmIwKwz2HA==: 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2VjNjAwNzY2MDQ4YTNmYTdmNWM4MTQwYzdiOTIxZjIwZGRhMzI1YjJhN2JiOGEwxDamag==: 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 
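Each connect_authenticate pass above (sha512/ffdhe8192, keyid 1 through 4) follows the same pattern: nvmet_auth_set_key programs the kernel target with the digest, DH group and DHHC-1 key pair, bdev_nvme_set_options restricts the SPDK host to the same algorithms, and bdev_nvme_attach_controller performs the authenticated connect. The xtrace output does not show where the echo commands are redirected, so the configfs paths in the sketch below are an assumption based on the upstream Linux nvmet DH-HMAC-CHAP attributes (the hosts/nqn.2024-02.io.spdk:host0 directory itself is confirmed by the cleanup commands later in this run); key1/ckey1 are key names registered earlier in the script, outside this excerpt, and the DHHC-1 strings are placeholders.

  # Target side (kernel nvmet) -- assumed configfs attribute names, not visible in the trace
  HOSTNQN=nqn.2024-02.io.spdk:host0
  HOST_CFS=/sys/kernel/config/nvmet/hosts/$HOSTNQN
  echo 'hmac(sha512)'        > "$HOST_CFS/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo 'ffdhe8192'           > "$HOST_CFS/dhchap_dhgroup"   # DH group
  echo 'DHHC-1:00:<key>'     > "$HOST_CFS/dhchap_key"       # key the target expects from the host
  echo 'DHHC-1:02:<ctrlkey>' > "$HOST_CFS/dhchap_ctrl_key"  # controller key for bidirectional auth

  # Host side (SPDK) -- same RPCs as in the trace above
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

The sha256/ffdhe2048 key programmed at host/auth.sh@110 just above is then used for the negative probes that follow, where the host either omits the key or presents the wrong one.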
00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.391 request: 00:26:43.391 { 00:26:43.391 "name": "nvme0", 00:26:43.391 "trtype": "rdma", 00:26:43.391 "traddr": "192.168.100.8", 00:26:43.391 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:43.391 "adrfam": "ipv4", 00:26:43.391 "trsvcid": "4420", 00:26:43.391 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:43.391 "method": "bdev_nvme_attach_controller", 00:26:43.391 "req_id": 1 00:26:43.391 } 00:26:43.391 Got JSON-RPC error response 00:26:43.391 response: 00:26:43.391 { 00:26:43.391 "code": -5, 00:26:43.391 "message": "Input/output error" 00:26:43.391 } 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.391 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 
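The request/response pair above is the first expected-failure probe: with DH-HMAC-CHAP configured on the target, an attach without any --dhchap-key must be rejected, which surfaces as the JSON-RPC error -5 (Input/output error) shown, and bdev_nvme_get_controllers must report an empty list afterwards. Outside the NOT/xtrace harness the same assertion could be written as the rough sketch below; the rpc.py invocation mirrors the attach traced above and the jq length check mirrors host/auth.sh@114.

  # Expect the unauthenticated attach to fail; treat success as a test error.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0; then
      echo "attach without a DH-HMAC-CHAP key unexpectedly succeeded" >&2
      exit 1
  fi
  # No controller may be left behind by the failed attach.
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]

The two probes that follow (key2 only, then key1 with the mismatched ckey2) are checked the same way.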
00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.392 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.650 request: 00:26:43.650 { 00:26:43.650 "name": "nvme0", 00:26:43.650 "trtype": "rdma", 00:26:43.650 "traddr": "192.168.100.8", 00:26:43.650 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:43.650 "adrfam": "ipv4", 00:26:43.650 "trsvcid": "4420", 00:26:43.650 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:43.650 "dhchap_key": "key2", 00:26:43.651 "method": "bdev_nvme_attach_controller", 00:26:43.651 "req_id": 1 00:26:43.651 } 00:26:43.651 Got JSON-RPC error response 00:26:43.651 response: 00:26:43.651 { 00:26:43.651 "code": -5, 00:26:43.651 "message": "Input/output error" 00:26:43.651 } 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.651 request: 00:26:43.651 { 00:26:43.651 "name": "nvme0", 00:26:43.651 "trtype": "rdma", 00:26:43.651 "traddr": "192.168.100.8", 00:26:43.651 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:43.651 "adrfam": "ipv4", 00:26:43.651 "trsvcid": "4420", 00:26:43.651 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:43.651 "dhchap_key": "key1", 00:26:43.651 "dhchap_ctrlr_key": "ckey2", 00:26:43.651 "method": "bdev_nvme_attach_controller", 00:26:43.651 "req_id": 1 00:26:43.651 } 00:26:43.651 Got JSON-RPC error response 00:26:43.651 response: 00:26:43.651 { 00:26:43.651 "code": -5, 00:26:43.651 "message": "Input/output error" 00:26:43.651 } 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:43.651 rmmod nvme_rdma 00:26:43.651 rmmod nvme_fabrics 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1057274 ']' 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1057274 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1057274 ']' 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1057274 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.651 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1057274 00:26:43.910 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:43.910 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:43.910 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1057274' 00:26:43.910 killing process with pid 1057274 00:26:43.910 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1057274 00:26:43.910 23:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1057274 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:43.910 23:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:44.168 23:17:36 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:46.700 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:46.700 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:46.959 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:48.336 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:26:48.595 23:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DwW /tmp/spdk.key-null.Xhy /tmp/spdk.key-sha256.OKM /tmp/spdk.key-sha384.Usy /tmp/spdk.key-sha512.Iax /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:48.595 23:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:51.882 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:51.882 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:51.882 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:51.882 00:26:51.882 real 0m54.147s 00:26:51.882 user 0m48.796s 00:26:51.882 sys 0m13.047s 00:26:51.882 23:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:51.882 23:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.882 ************************************ 00:26:51.882 END TEST nvmf_auth_host 00:26:51.882 ************************************ 00:26:51.882 23:17:43 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:26:51.882 23:17:43 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:51.882 23:17:43 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:51.882 23:17:43 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:51.882 23:17:43 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:51.882 23:17:43 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:51.882 23:17:43 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:51.882 23:17:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:51.882 ************************************ 00:26:51.882 START TEST nvmf_bdevperf 00:26:51.882 ************************************ 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:51.882 * Looking for test storage... 00:26:51.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.882 23:17:43 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:51.883 23:17:43 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.883 23:17:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.446 
23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:26:58.446 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:26:58.446 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:58.446 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:26:58.447 Found net devices under 0000:da:00.0: mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:26:58.447 Found net devices under 0000:da:00.1: mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:58.447 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:58.447 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:26:58.447 altname enp218s0f0np0 00:26:58.447 altname ens818f0np0 00:26:58.447 inet 192.168.100.8/24 scope global mlx_0_0 00:26:58.447 valid_lft forever preferred_lft forever 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:58.447 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:58.447 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:26:58.447 altname enp218s0f1np1 00:26:58.447 altname ens818f1np1 00:26:58.447 inet 192.168.100.9/24 scope global mlx_0_1 00:26:58.447 valid_lft forever preferred_lft forever 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
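At this point nvmftestinit has found the two ConnectX (0x15b3:0x1015) ports, loaded the RDMA stack and left 192.168.100.8/24 and 192.168.100.9/24 configured on mlx_0_0 and mlx_0_1. A manual equivalent, useful when reproducing the environment outside the harness, is sketched below; the module list, interface names and addresses are the ones appearing in this trace, whereas allocate_nic_ips in nvmf/common.sh derives them dynamically from NVMF_IP_PREFIX.

  # Load the kernel RDMA/IB stack that the trace above loads one module at a time.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done

  # Assign the test subnet used throughout this run (192.168.100.0/24).
  ip addr add 192.168.100.8/24 dev mlx_0_0
  ip addr add 192.168.100.9/24 dev mlx_0_1

  # Same check as the get_ip_address helper: first IPv4 address, prefix length stripped.
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8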
00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:58.447 192.168.100.9' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:58.447 192.168.100.9' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:58.447 192.168.100.9' 00:26:58.447 23:17:50 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:58.447 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1071553 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1071553 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1071553 ']' 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:58.448 23:17:50 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.448 [2024-06-07 23:17:50.328684] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:26:58.448 [2024-06-07 23:17:50.328729] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.448 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.448 [2024-06-07 23:17:50.389890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.448 [2024-06-07 23:17:50.461064] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.448 [2024-06-07 23:17:50.461106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.448 [2024-06-07 23:17:50.461113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.448 [2024-06-07 23:17:50.461118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:58.448 [2024-06-07 23:17:50.461123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.448 [2024-06-07 23:17:50.461240] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.448 [2024-06-07 23:17:50.461337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.448 [2024-06-07 23:17:50.461338] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.015 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.015 [2024-06-07 23:17:51.192395] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16351f0/0x16396e0) succeed. 00:26:59.015 [2024-06-07 23:17:51.201431] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1636790/0x167ad70) succeed. 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.274 Malloc0 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.274 [2024-06-07 23:17:51.344072] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target 
Listening on 192.168.100.8 port 4420 *** 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.274 { 00:26:59.274 "params": { 00:26:59.274 "name": "Nvme$subsystem", 00:26:59.274 "trtype": "$TEST_TRANSPORT", 00:26:59.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.274 "adrfam": "ipv4", 00:26:59.274 "trsvcid": "$NVMF_PORT", 00:26:59.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.274 "hdgst": ${hdgst:-false}, 00:26:59.274 "ddgst": ${ddgst:-false} 00:26:59.274 }, 00:26:59.274 "method": "bdev_nvme_attach_controller" 00:26:59.274 } 00:26:59.274 EOF 00:26:59.274 )") 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:59.274 23:17:51 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.274 "params": { 00:26:59.274 "name": "Nvme1", 00:26:59.274 "trtype": "rdma", 00:26:59.274 "traddr": "192.168.100.8", 00:26:59.274 "adrfam": "ipv4", 00:26:59.274 "trsvcid": "4420", 00:26:59.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.274 "hdgst": false, 00:26:59.274 "ddgst": false 00:26:59.274 }, 00:26:59.274 "method": "bdev_nvme_attach_controller" 00:26:59.274 }' 00:26:59.274 [2024-06-07 23:17:51.391042] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:26:59.274 [2024-06-07 23:17:51.391088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071798 ] 00:26:59.274 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.274 [2024-06-07 23:17:51.451375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.274 [2024-06-07 23:17:51.524804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.533 Running I/O for 1 seconds... 
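The target-side setup traced above (host/bdevperf.sh@17-21) goes through rpc_cmd, the harness wrapper around scripts/rpc.py; stripped of the wrapper it is roughly the following sketch, with every argument taken from this run (rpc.py talks to /var/tmp/spdk.sock by default). The 1-second bdevperf pass whose results appear just below is then started against a JSON config equivalent to the one printed by gen_nvmf_target_json in the trace.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # 1-second verify run; the harness passes the generated JSON as /dev/fd/62.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1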
00:27:00.469 
00:27:00.469                                                                    Latency(us)
00:27:00.469 Device Information             : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min      max
00:27:00.469 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:00.469 Verification LBA range: start 0x0 length 0x4000
00:27:00.469 Nvme1n1                        :      1.00  18121.17    70.79     0.00    0.00   7024.65  2371.78 11983.73
===================================================================================================================
00:27:00.469 Total                          :            18121.17    70.79     0.00    0.00   7024.65  2371.78 11983.73
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1072031
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:00.728 {
00:27:00.728 "params": {
00:27:00.728 "name": "Nvme$subsystem",
00:27:00.728 "trtype": "$TEST_TRANSPORT",
00:27:00.728 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:00.728 "adrfam": "ipv4",
00:27:00.728 "trsvcid": "$NVMF_PORT",
00:27:00.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:00.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:00.728 "hdgst": ${hdgst:-false},
00:27:00.728 "ddgst": ${ddgst:-false}
00:27:00.728 },
00:27:00.728 "method": "bdev_nvme_attach_controller"
00:27:00.728 }
00:27:00.728 EOF
00:27:00.728 )")
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:27:00.728 23:17:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:00.728 "params": {
00:27:00.728 "name": "Nvme1",
00:27:00.728 "trtype": "rdma",
00:27:00.728 "traddr": "192.168.100.8",
00:27:00.728 "adrfam": "ipv4",
00:27:00.728 "trsvcid": "4420",
00:27:00.728 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:00.728 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:00.728 "hdgst": false,
00:27:00.728 "ddgst": false
00:27:00.728 },
00:27:00.728 "method": "bdev_nvme_attach_controller"
00:27:00.728 }'
00:27:00.728 [2024-06-07 23:17:52.957694] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
00:27:00.728 [2024-06-07 23:17:52.957747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072031 ]
00:27:00.989 EAL: No free 2048 kB hugepages reported on node 1
00:27:00.989 [2024-06-07 23:17:53.018475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.989 [2024-06-07 23:17:53.088141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:27:00.989 Running I/O for 15 seconds...
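What follows next in the log is the failure-injection half of the test: the second, longer bdevperf run started above (host/bdevperf.sh@29-32, with the extra -f flag the harness uses for this pass) gets a few seconds of healthy traffic, and then the target is killed out from under it. A condensed sketch using the PIDs from this run, not the literal harness code:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!            # 1072031 in this run
    sleep 3                   # let the verify workload reach steady state
    kill -9 "$nvmfpid"        # 1071553, the nvmf_tgt started earlier (bdevperf.sh@33)
    sleep 3                   # give the initiator time to notice the dead target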
00:27:04.322 23:17:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1071553 00:27:04.322 23:17:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:04.890 [2024-06-07 23:17:56.946939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.890 [2024-06-07 23:17:56.946973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.890 [2024-06-07 23:17:56.946989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.890 [2024-06-07 23:17:56.946995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.890 [2024-06-07 23:17:56.947004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 
23:17:56.947111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.891 [2024-06-07 23:17:56.947374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.891 [2024-06-07 23:17:56.947381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 
[2024-06-07 23:17:56.947664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.892 [2024-06-07 23:17:56.947748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.892 [2024-06-07 23:17:56.947755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.947985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.947994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128656 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.893 [2024-06-07 23:17:56.948162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.893 [2024-06-07 23:17:56.948168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:04.894 [2024-06-07 23:17:56.948224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948361] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.894 [2024-06-07 23:17:56.948510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.894 [2024-06-07 23:17:56.948518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.895 [2024-06-07 23:17:56.948711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.948719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:27:04.895 [2024-06-07 23:17:56.948726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.950780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.895 [2024-06-07 23:17:56.950791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.895 [2024-06-07 23:17:56.950797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128008 len:8 PRP1 0x0 PRP2 0x0 00:27:04.895 [2024-06-07 23:17:56.950804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.895 [2024-06-07 23:17:56.950842] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
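The burst of ABORTED - SQ DELETION completions above is the expected drain of bdevperf's queued I/O when the qpair is torn down for a controller reset (the last notice shows the qpair being disconnected and freed): status (00/08) is status code type 0x0, generic command status, with status code 0x08, Command Aborted due to SQ Deletion. A quick way to tally the aborted commands per opcode from a saved copy of this output (the file name bdevperf.log is only a placeholder):

    # count how many queued WRITE vs READ commands were aborted (log file name is hypothetical)
    grep -o 'print_command: \*NOTICE\*: [A-Z]*' bdevperf.log | sort | uniq -c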
00:27:04.895 [2024-06-07 23:17:56.953527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.895 [2024-06-07 23:17:56.967807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:04.895 [2024-06-07 23:17:56.970839] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:04.895 [2024-06-07 23:17:56.970857] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:04.895 [2024-06-07 23:17:56.970863] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:05.830 [2024-06-07 23:17:57.974881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:05.830 [2024-06-07 23:17:57.974902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.830 [2024-06-07 23:17:57.975134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.830 [2024-06-07 23:17:57.975144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.830 [2024-06-07 23:17:57.975152] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:05.830 [2024-06-07 23:17:57.977185] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:05.830 [2024-06-07 23:17:57.978285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.830 [2024-06-07 23:17:57.990594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.830 [2024-06-07 23:17:57.993170] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:05.830 [2024-06-07 23:17:57.993189] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:05.830 [2024-06-07 23:17:57.993195] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:06.761 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1071553 Killed "${NVMF_APP[@]}" "$@" 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1072961 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1072961 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1072961 ']' 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:06.761 23:17:58 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.761 [2024-06-07 23:17:58.960482] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:27:06.761 [2024-06-07 23:17:58.960522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.761 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.761 [2024-06-07 23:17:58.997321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:06.761 [2024-06-07 23:17:58.997346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:06.761 [2024-06-07 23:17:58.997522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.761 [2024-06-07 23:17:58.997530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.761 [2024-06-07 23:17:58.997538] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:06.761 [2024-06-07 23:17:59.000285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.761 [2024-06-07 23:17:59.003359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.761 [2024-06-07 23:17:59.005842] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:06.761 [2024-06-07 23:17:59.005860] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:06.761 [2024-06-07 23:17:59.005866] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:06.761 [2024-06-07 23:17:59.021786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.019 [2024-06-07 23:17:59.102761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.019 [2024-06-07 23:17:59.102794] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.019 [2024-06-07 23:17:59.102801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.019 [2024-06-07 23:17:59.102807] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.019 [2024-06-07 23:17:59.102812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.019 [2024-06-07 23:17:59.102849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.019 [2024-06-07 23:17:59.102933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.019 [2024-06-07 23:17:59.102934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.586 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.586 [2024-06-07 23:17:59.836423] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xba61f0/0xbaa6e0) succeed. 00:27:07.586 [2024-06-07 23:17:59.845601] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xba7790/0xbebd70) succeed. 
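The three Reactor started notices line up with the core mask the new target was launched with a few lines earlier: -m 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, which is also why the app reports Total cores available: 3. A quick check (the second step assumes bc is installed):

    printf '%d\n' 0xE            # 14
    echo 'obase=2; 14' | bc      # 1110 -> reactors on cores 1, 2 and 3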
00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.844 Malloc0 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.844 [2024-06-07 23:17:59.990964] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.844 23:17:59 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1072031 00:27:07.844 [2024-06-07 23:18:00.009850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:07.844 [2024-06-07 23:18:00.009876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.844 [2024-06-07 23:18:00.010056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.844 [2024-06-07 23:18:00.010066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.844 [2024-06-07 23:18:00.010074] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:07.844 [2024-06-07 23:18:00.012820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.844 [2024-06-07 23:18:00.021083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.844 [2024-06-07 23:18:00.064092] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
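rpc_cmd in these traces forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, so the target bring-up recorded above (together with the nvmf_create_transport call a few lines earlier) corresponds roughly to running, by hand:

    # same target configuration as the rpc_cmd calls in the trace
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420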
00:27:17.814 00:27:17.814 Latency(us) 00:27:17.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.814 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:17.814 Verification LBA range: start 0x0 length 0x4000 00:27:17.814 Nvme1n1 : 15.00 13211.29 51.61 10446.95 0.00 5391.26 366.69 1038589.56 00:27:17.814 =================================================================================================================== 00:27:17.814 Total : 13211.29 51.61 10446.95 0.00 5391.26 366.69 1038589.56 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:17.814 rmmod nvme_rdma 00:27:17.814 rmmod nvme_fabrics 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1072961 ']' 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1072961 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1072961 ']' 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1072961 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:27:17.814 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1072961 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1072961' 00:27:17.815 killing process with pid 1072961 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1072961 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1072961 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:17.815 00:27:17.815 real 0m25.058s 00:27:17.815 user 1m4.401s 00:27:17.815 sys 0m5.823s 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:17.815 23:18:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.815 ************************************ 00:27:17.815 END TEST nvmf_bdevperf 00:27:17.815 ************************************ 00:27:17.815 23:18:08 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:17.815 23:18:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:17.815 23:18:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:17.815 23:18:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:17.815 ************************************ 00:27:17.815 START TEST nvmf_target_disconnect 00:27:17.815 ************************************ 00:27:17.815 23:18:08 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:17.815 * Looking for test storage... 00:27:17.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.815 23:18:09 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.815 23:18:09 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:23.086 23:18:14 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:27:23.086 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:27:23.086 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:23.086 23:18:14 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:27:23.086 Found net devices under 0000:da:00.0: mlx_0_0 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:27:23.086 Found net devices under 0000:da:00.1: mlx_0_1 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:23.086 23:18:14 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:23.086 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:23.087 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:23.087 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:27:23.087 altname enp218s0f0np0 00:27:23.087 altname ens818f0np0 00:27:23.087 inet 192.168.100.8/24 scope global mlx_0_0 00:27:23.087 valid_lft forever preferred_lft forever 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:23.087 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:23.087 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:27:23.087 altname enp218s0f1np1 00:27:23.087 altname ens818f1np1 00:27:23.087 inet 192.168.100.9/24 scope global mlx_0_1 00:27:23.087 valid_lft forever preferred_lft forever 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:23.087 192.168.100.9' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:23.087 192.168.100.9' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:23.087 192.168.100.9' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:23.087 23:18:14 
nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:23.087 23:18:14 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.087 ************************************ 00:27:23.087 START TEST nvmf_target_disconnect_tc1 00:27:23.087 ************************************ 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:23.087 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.088 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:23.088 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:23.088 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:23.088 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:27:23.088 23:18:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:23.088 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.088 [2024-06-07 23:18:15.124072] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:23.088 [2024-06-07 23:18:15.124109] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:23.088 [2024-06-07 23:18:15.124116] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:27:24.023 [2024-06-07 23:18:16.128282] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:24.023 [2024-06-07 23:18:16.128335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:27:24.023 [2024-06-07 23:18:16.128360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:27:24.023 [2024-06-07 23:18:16.128414] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:24.023 [2024-06-07 23:18:16.128435] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:24.023 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:27:24.023 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:24.023 Initializing NVMe Controllers 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:24.023 00:27:24.023 real 0m1.121s 00:27:24.023 user 0m0.948s 00:27:24.023 sys 0m0.161s 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.023 ************************************ 00:27:24.023 END TEST nvmf_target_disconnect_tc1 00:27:24.023 ************************************ 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.023 ************************************ 00:27:24.023 START TEST nvmf_target_disconnect_tc2 00:27:24.023 ************************************ 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1078696 00:27:24.023 23:18:16 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1078696 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1078696 ']' 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:24.023 23:18:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.024 [2024-06-07 23:18:16.264717] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:27:24.024 [2024-06-07 23:18:16.264761] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.024 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.282 [2024-06-07 23:18:16.336474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.282 [2024-06-07 23:18:16.411641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.282 [2024-06-07 23:18:16.411680] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.282 [2024-06-07 23:18:16.411687] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.282 [2024-06-07 23:18:16.411693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.282 [2024-06-07 23:18:16.411700] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
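The tracepoint notices above spell out the two ways to pull trace data from this running target: decode it live with the spdk_trace tool, or copy the raw shared-memory file for offline analysis. For example (output locations are arbitrary):

    # snapshot the nvmf tracepoints from shm id 0, as the notice suggests
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw trace file for later decoding
    cp /dev/shm/nvmf_trace.0 /tmp/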
00:27:24.282 [2024-06-07 23:18:16.411811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:27:24.282 [2024-06-07 23:18:16.411922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:27:24.282 [2024-06-07 23:18:16.411955] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.282 [2024-06-07 23:18:16.411956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.847 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 Malloc0 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 [2024-06-07 23:18:17.155318] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbf4b00/0xc00700) succeed. 00:27:25.104 [2024-06-07 23:18:17.164599] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbf6140/0xca0800) succeed. 
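With the RDMA transport created and both HCAs registered (the two create_ib_device notices above), the result can be double-checked from the shell; nvmf_get_transports is the usual SPDK query RPC for configured transports and ibv_devices comes from rdma-core, so both are assumptions about the installed tooling:

    # confirm the transport the target just created and the RDMA devices it can see
    ./scripts/rpc.py nvmf_get_transports
    ibv_devices          # should list mlx5_0 and mlx5_1, matching the notices above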
00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 [2024-06-07 23:18:17.306775] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1078939 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:25.104 23:18:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:25.104 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.632 23:18:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1078696 00:27:27.632 23:18:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:28.567 
Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Write completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 Read completed with error (sct=0, sc=8) 00:27:28.567 starting I/O failed 00:27:28.567 [2024-06-07 23:18:20.496958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:29.134 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1078696 Killed "${NVMF_APP[@]}" "$@" 00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.134 23:18:21 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1079628
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1079628
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1079628 ']'
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:29.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:27:29.134 23:18:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.134 [2024-06-07 23:18:21.380311] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization...
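This is the heart of test case tc2: while the reconnect example (started above with -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF against 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420') is driving I/O, the script SIGKILLs the first target (pid 1078696), and disconnect_init brings up a fresh nvmf_tgt with the same core mask, which is what the lines above show initializing. A rough sketch of that sequence under the same assumptions as the earlier snippets; the variable names are illustrative, not the script's actual ones:

  # Host side: start the I/O workload in the background
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  # Target side: kill the serving nvmf_tgt mid-I/O, then start a replacement
  kill -9 "$nvmfpid"
  sleep 2
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # ...then re-provision the bdev, transport, subsystem and listeners as before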
00:27:29.134 [2024-06-07 23:18:21.380352] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.134 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.392 [2024-06-07 23:18:21.455749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Write completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 Read completed with error (sct=0, sc=8) 00:27:29.392 starting I/O failed 00:27:29.392 [2024-06-07 23:18:21.502212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.392 [2024-06-07 23:18:21.503733] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event 
channel (status = 8) 00:27:29.392 [2024-06-07 23:18:21.503749] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:29.392 [2024-06-07 23:18:21.503756] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:29.392 [2024-06-07 23:18:21.532697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.393 [2024-06-07 23:18:21.532725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.393 [2024-06-07 23:18:21.532732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.393 [2024-06-07 23:18:21.532738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.393 [2024-06-07 23:18:21.532743] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.393 [2024-06-07 23:18:21.532868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:27:29.393 [2024-06-07 23:18:21.532975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:27:29.393 [2024-06-07 23:18:21.533082] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.393 [2024-06-07 23:18:21.533083] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:27:29.955 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:29.955 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:27:29.955 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.956 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:29.956 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 Malloc0 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 [2024-06-07 23:18:22.282693] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b9fb00/0x1bab700) succeed. 00:27:30.213 [2024-06-07 23:18:22.292697] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba1140/0x1c4b800) succeed. 
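The nvme_rdma errors above (RDMA_CM_EVENT_REJECTED, connect error -74) come from the host's recovery attempts racing with the restart: until the replacement target re-creates the RDMA listener, its connection manager simply rejects incoming connects. One way to confirm from the target side that the listener is back, assuming the standard nvmf_get_subsystems RPC and its usual JSON fields:

  # Lists subsystems with their listen_addresses; expect 192.168.100.8:4420 under cnode1
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems | \
      python3 -c 'import json,sys; [print(s["nqn"], s.get("listen_addresses")) for s in json.load(sys.stdin)]'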
00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 [2024-06-07 23:18:22.435095] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.213 23:18:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1078939 00:27:30.471 [2024-06-07 23:18:22.508007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 
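From here on the output repeats one pattern: the new target rejects each I/O-qpair CONNECT with 'Unknown controller ID 0x1', because the host re-attaches its I/O qpairs using the controller ID it was assigned by the killed instance, an ID the replacement target has never seen. The host then reports sct 1, sc 130 (0x82, which appears to correspond to the Fabrics "connect invalid parameters" status) and gives the qpair up as unrecoverable. A small sketch for summarizing how often this happened in the captured console output; the log file name is illustrative:

  # Count controller-ID rejections and unrecovered qpairs in the saved console output
  grep -c 'Unknown controller ID 0x1' nvmf_target_disconnect.log
  grep -c 'qpair failed and we were unable to recover it' nvmf_target_disconnect.log
  # Per-error-site breakdown, e.g. ctrlr.c vs nvme_fabric.c vs nvme_rdma.c
  grep -o '[a-z_]*\.c:[ ]*[0-9]*:[a-z_]*' nvmf_target_disconnect.log | sort | uniq -c | sort -rn | head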
00:27:30.471 [2024-06-07 23:18:22.519301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.519357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.519375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.519382] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.519388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.529611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.539251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.539294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.539310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.539316] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.539322] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.549677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.559230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.559266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.559281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.559288] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.559293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.569715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 
00:27:30.471 [2024-06-07 23:18:22.579385] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.579425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.579440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.579447] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.579453] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.589833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.599220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.599257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.599272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.599279] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.599285] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.609869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.619480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.619516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.619530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.619537] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.619543] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.629904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 
00:27:30.471 [2024-06-07 23:18:22.639440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.639473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.639491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.639498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.639504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.650046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.659578] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.659619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.659637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.659644] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.659651] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.670114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 00:27:30.471 [2024-06-07 23:18:22.679445] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.679491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.471 [2024-06-07 23:18:22.679505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.471 [2024-06-07 23:18:22.679512] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.471 [2024-06-07 23:18:22.679519] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.471 [2024-06-07 23:18:22.689930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.471 qpair failed and we were unable to recover it. 
00:27:30.471 [2024-06-07 23:18:22.699639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.471 [2024-06-07 23:18:22.699682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.472 [2024-06-07 23:18:22.699696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.472 [2024-06-07 23:18:22.699703] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.472 [2024-06-07 23:18:22.699708] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.472 [2024-06-07 23:18:22.710257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.472 qpair failed and we were unable to recover it. 00:27:30.472 [2024-06-07 23:18:22.719670] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.472 [2024-06-07 23:18:22.719708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.472 [2024-06-07 23:18:22.719722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.472 [2024-06-07 23:18:22.719729] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.472 [2024-06-07 23:18:22.719734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.472 [2024-06-07 23:18:22.730131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.472 qpair failed and we were unable to recover it. 00:27:30.472 [2024-06-07 23:18:22.739745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.472 [2024-06-07 23:18:22.739783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.472 [2024-06-07 23:18:22.739797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.472 [2024-06-07 23:18:22.739804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.472 [2024-06-07 23:18:22.739809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.731 [2024-06-07 23:18:22.750223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.731 qpair failed and we were unable to recover it. 
00:27:30.731 [2024-06-07 23:18:22.759768] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.731 [2024-06-07 23:18:22.759809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.731 [2024-06-07 23:18:22.759823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.731 [2024-06-07 23:18:22.759829] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.731 [2024-06-07 23:18:22.759835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.731 [2024-06-07 23:18:22.770225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-06-07 23:18:22.779827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.731 [2024-06-07 23:18:22.779859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.731 [2024-06-07 23:18:22.779873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.731 [2024-06-07 23:18:22.779880] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.731 [2024-06-07 23:18:22.779886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.731 [2024-06-07 23:18:22.790363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.731 qpair failed and we were unable to recover it. 00:27:30.731 [2024-06-07 23:18:22.799967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.799997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.800016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.800023] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.800029] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.810421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-06-07 23:18:22.819972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.820007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.820034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.820041] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.820046] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.830439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.839988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.840034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.840049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.840055] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.840061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.850481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.860031] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.860069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.860083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.860090] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.860095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.870674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-06-07 23:18:22.880142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.880180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.880195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.880201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.880207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.890804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.900232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.900270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.900284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.900290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.900301] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.910676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.920315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.920349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.920363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.920370] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.920375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.930651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-06-07 23:18:22.940231] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.940270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.940284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.940290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.940296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.950882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.960460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.960499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.960513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.960519] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.960524] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.971002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 00:27:30.732 [2024-06-07 23:18:22.980546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:22.980585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:22.980598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:22.980605] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:22.980610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:30.732 [2024-06-07 23:18:22.990905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.732 qpair failed and we were unable to recover it. 
00:27:30.732 [2024-06-07 23:18:23.000457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.732 [2024-06-07 23:18:23.000496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.732 [2024-06-07 23:18:23.000509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.732 [2024-06-07 23:18:23.000516] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.732 [2024-06-07 23:18:23.000522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.011012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.020587] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.020626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.020641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.020647] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.020653] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.030962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.040556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.040587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.040601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.040607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.040613] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.051235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 
00:27:31.002 [2024-06-07 23:18:23.060761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.060799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.060813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.060819] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.060825] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.071256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.080800] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.080838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.080856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.080862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.080868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.091180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.100797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.100832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.100845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.100851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.100857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.111266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 
00:27:31.002 [2024-06-07 23:18:23.120973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.121007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.121025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.121031] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.121037] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.131279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.141004] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.141045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.141059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.141065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.141071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.151208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.161142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.161178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.161191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.161197] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.161203] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.171549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 
00:27:31.002 [2024-06-07 23:18:23.181187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.181225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.181239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.181246] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.181251] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.191468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.201099] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.002 [2024-06-07 23:18:23.201131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.002 [2024-06-07 23:18:23.201144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.002 [2024-06-07 23:18:23.201151] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.002 [2024-06-07 23:18:23.201156] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.002 [2024-06-07 23:18:23.211498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.002 qpair failed and we were unable to recover it. 00:27:31.002 [2024-06-07 23:18:23.221267] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.003 [2024-06-07 23:18:23.221305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.003 [2024-06-07 23:18:23.221318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.003 [2024-06-07 23:18:23.221324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.003 [2024-06-07 23:18:23.221330] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.003 [2024-06-07 23:18:23.231664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.003 qpair failed and we were unable to recover it. 
00:27:31.003 [2024-06-07 23:18:23.241368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.003 [2024-06-07 23:18:23.241403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.003 [2024-06-07 23:18:23.241416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.003 [2024-06-07 23:18:23.241423] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.003 [2024-06-07 23:18:23.241429] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.003 [2024-06-07 23:18:23.251571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.003 qpair failed and we were unable to recover it. 00:27:31.003 [2024-06-07 23:18:23.261335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.003 [2024-06-07 23:18:23.261373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.003 [2024-06-07 23:18:23.261390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.003 [2024-06-07 23:18:23.261396] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.003 [2024-06-07 23:18:23.261402] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.003 [2024-06-07 23:18:23.271659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.003 qpair failed and we were unable to recover it. 00:27:31.276 [2024-06-07 23:18:23.281408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.281445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.281460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.281466] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.281472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.291935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 
00:27:31.276 [2024-06-07 23:18:23.301448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.301487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.301500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.301507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.301513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.311889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-06-07 23:18:23.321498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.321536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.321549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.321555] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.321561] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.331971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-06-07 23:18:23.341425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.341462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.341475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.341482] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.341490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.352114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 
00:27:31.276 [2024-06-07 23:18:23.361648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.361687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.361701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.361707] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.361713] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.372033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-06-07 23:18:23.381663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.381702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.276 [2024-06-07 23:18:23.381716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.276 [2024-06-07 23:18:23.381722] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.276 [2024-06-07 23:18:23.381728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.276 [2024-06-07 23:18:23.392117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-06-07 23:18:23.401645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.276 [2024-06-07 23:18:23.401681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.401696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.401702] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.401708] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.412147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.277 [2024-06-07 23:18:23.421785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.421826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.421840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.421846] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.421852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.432229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-06-07 23:18:23.441949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.441981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.441995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.442001] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.442007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.452384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-06-07 23:18:23.461739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.461776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.461790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.461797] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.461802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.472192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.277 [2024-06-07 23:18:23.481789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.481822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.481837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.481843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.481849] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.492060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-06-07 23:18:23.501834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.501874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.501888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.501894] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.501900] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.512094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-06-07 23:18:23.521933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.521964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.521981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.521987] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.521993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.532169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.277 [2024-06-07 23:18:23.542036] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.277 [2024-06-07 23:18:23.542074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.277 [2024-06-07 23:18:23.542087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.277 [2024-06-07 23:18:23.542094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.277 [2024-06-07 23:18:23.542100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.277 [2024-06-07 23:18:23.552423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.562003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.562050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.562064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.562071] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.562076] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.572531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.582071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.582107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.582122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.582128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.582134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.592476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 
00:27:31.535 [2024-06-07 23:18:23.602186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.602219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.602232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.602239] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.602244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.612495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.622249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.622291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.622304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.622311] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.622316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.632733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.642353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.642398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.642412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.642419] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.642424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.652703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 
00:27:31.535 [2024-06-07 23:18:23.662449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.662489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.662502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.662508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.662514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.672604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.682431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.682469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.682483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.682489] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.682495] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.692846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.702424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.702463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.702479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.702485] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.702491] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.712941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 
00:27:31.535 [2024-06-07 23:18:23.722549] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.722586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.722600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.722606] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.722612] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.732951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.742624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.742663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.742677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.742684] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.742690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.752802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.535 [2024-06-07 23:18:23.762636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.762677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.762691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.762698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.762704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.773044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 
00:27:31.535 [2024-06-07 23:18:23.782724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.535 [2024-06-07 23:18:23.782764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.535 [2024-06-07 23:18:23.782778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.535 [2024-06-07 23:18:23.782785] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.535 [2024-06-07 23:18:23.782794] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.535 [2024-06-07 23:18:23.793170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.535 qpair failed and we were unable to recover it. 00:27:31.536 [2024-06-07 23:18:23.802863] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.536 [2024-06-07 23:18:23.802902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.536 [2024-06-07 23:18:23.802916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.536 [2024-06-07 23:18:23.802922] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.536 [2024-06-07 23:18:23.802928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.793 [2024-06-07 23:18:23.813223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.793 qpair failed and we were unable to recover it. 00:27:31.793 [2024-06-07 23:18:23.822833] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.793 [2024-06-07 23:18:23.822870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.793 [2024-06-07 23:18:23.822883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.793 [2024-06-07 23:18:23.822890] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.793 [2024-06-07 23:18:23.822896] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.793 [2024-06-07 23:18:23.833162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.793 qpair failed and we were unable to recover it. 
00:27:31.794 [2024-06-07 23:18:23.842930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.842964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.842977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.842984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.842989] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.853261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:23.862974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.863016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.863030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.863036] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.863042] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.873402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:23.883048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.883089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.883104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.883110] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.883116] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.893443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 
00:27:31.794 [2024-06-07 23:18:23.903158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.903195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.903209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.903215] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.903220] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.913493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:23.923117] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.923153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.923166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.923172] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.923178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.933489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:23.943281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.943316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.943329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.943336] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.943341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.953623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 
00:27:31.794 [2024-06-07 23:18:23.963307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.963348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.963365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.963372] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.963377] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.973702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:23.983283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:23.983320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:23.983334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:23.983340] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:23.983346] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:23.993571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:24.003382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:24.003417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:24.003431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:24.003437] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:24.003443] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:24.013834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 
00:27:31.794 [2024-06-07 23:18:24.023474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:24.023513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:24.023526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:24.023532] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:24.023538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:24.033888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:24.043418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:24.043453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:24.043466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:24.043472] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:24.043478] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:31.794 [2024-06-07 23:18:24.054013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.794 qpair failed and we were unable to recover it. 00:27:31.794 [2024-06-07 23:18:24.063523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.794 [2024-06-07 23:18:24.063556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.794 [2024-06-07 23:18:24.063570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.794 [2024-06-07 23:18:24.063577] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.794 [2024-06-07 23:18:24.063582] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.073989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 
00:27:32.052 [2024-06-07 23:18:24.083537] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.083576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.083591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.083597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.083603] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.094012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.052 [2024-06-07 23:18:24.103611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.103648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.103661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.103668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.103673] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.113902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.052 [2024-06-07 23:18:24.123806] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.123852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.123877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.123884] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.123890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.134066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 
00:27:32.052 [2024-06-07 23:18:24.143757] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.143790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.143806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.143813] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.143818] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.154295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.052 [2024-06-07 23:18:24.163816] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.163854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.163867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.163874] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.163880] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.174186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.052 [2024-06-07 23:18:24.183924] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.183961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.183976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.183982] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.183988] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.194275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 
00:27:32.052 [2024-06-07 23:18:24.204000] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.204045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.204059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.204065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.204071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.214414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.052 [2024-06-07 23:18:24.224072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.052 [2024-06-07 23:18:24.224105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.052 [2024-06-07 23:18:24.224119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.052 [2024-06-07 23:18:24.224125] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.052 [2024-06-07 23:18:24.224136] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.052 [2024-06-07 23:18:24.234438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.052 qpair failed and we were unable to recover it. 00:27:32.053 [2024-06-07 23:18:24.244107] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.053 [2024-06-07 23:18:24.244143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.053 [2024-06-07 23:18:24.244157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.053 [2024-06-07 23:18:24.244163] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.053 [2024-06-07 23:18:24.244169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.053 [2024-06-07 23:18:24.254644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.053 qpair failed and we were unable to recover it. 
00:27:32.053 [2024-06-07 23:18:24.264228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.053 [2024-06-07 23:18:24.264264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.053 [2024-06-07 23:18:24.264277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.053 [2024-06-07 23:18:24.264284] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.053 [2024-06-07 23:18:24.264289] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.053 [2024-06-07 23:18:24.274327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.053 qpair failed and we were unable to recover it. 00:27:32.053 [2024-06-07 23:18:24.284222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.053 [2024-06-07 23:18:24.284264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.053 [2024-06-07 23:18:24.284278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.053 [2024-06-07 23:18:24.284284] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.053 [2024-06-07 23:18:24.284290] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.053 [2024-06-07 23:18:24.294594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.053 qpair failed and we were unable to recover it. 00:27:32.053 [2024-06-07 23:18:24.304338] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.053 [2024-06-07 23:18:24.304373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.053 [2024-06-07 23:18:24.304387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.053 [2024-06-07 23:18:24.304393] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.053 [2024-06-07 23:18:24.304399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.053 [2024-06-07 23:18:24.314677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.053 qpair failed and we were unable to recover it. 
00:27:32.053 [2024-06-07 23:18:24.324366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.053 [2024-06-07 23:18:24.324409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.053 [2024-06-07 23:18:24.324423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.053 [2024-06-07 23:18:24.324429] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.053 [2024-06-07 23:18:24.324435] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.310 [2024-06-07 23:18:24.334687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.310 qpair failed and we were unable to recover it. 00:27:32.310 [2024-06-07 23:18:24.344703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.310 [2024-06-07 23:18:24.344740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.310 [2024-06-07 23:18:24.344754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.310 [2024-06-07 23:18:24.344760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.310 [2024-06-07 23:18:24.344766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.310 [2024-06-07 23:18:24.354702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.310 qpair failed and we were unable to recover it. 00:27:32.310 [2024-06-07 23:18:24.364509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.310 [2024-06-07 23:18:24.364547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.310 [2024-06-07 23:18:24.364561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.310 [2024-06-07 23:18:24.364568] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.310 [2024-06-07 23:18:24.364574] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.310 [2024-06-07 23:18:24.374932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.310 qpair failed and we were unable to recover it. 
00:27:32.310 [2024-06-07 23:18:24.384530] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.310 [2024-06-07 23:18:24.384568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.310 [2024-06-07 23:18:24.384583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.310 [2024-06-07 23:18:24.384589] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.310 [2024-06-07 23:18:24.384595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.310 [2024-06-07 23:18:24.394910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.310 qpair failed and we were unable to recover it. 00:27:32.310 [2024-06-07 23:18:24.404659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.310 [2024-06-07 23:18:24.404697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.310 [2024-06-07 23:18:24.404714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.310 [2024-06-07 23:18:24.404720] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.404726] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.414994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.424692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.424727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.424741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.424747] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.424752] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.434800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 
00:27:32.311 [2024-06-07 23:18:24.444742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.444785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.444800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.444807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.444812] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.455218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.464750] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.464782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.464796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.464803] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.464809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.475071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.484827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.484863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.484877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.484884] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.484890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.494972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 
00:27:32.311 [2024-06-07 23:18:24.504894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.504930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.504944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.504950] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.504956] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.515282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.524969] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.525007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.525025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.525032] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.525038] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.535351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.544944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.544983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.544997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.545003] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.545013] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.555584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 
00:27:32.311 [2024-06-07 23:18:24.564893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.564930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.564943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.564950] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.564956] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.311 [2024-06-07 23:18:24.575289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.311 qpair failed and we were unable to recover it. 00:27:32.311 [2024-06-07 23:18:24.585220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.311 [2024-06-07 23:18:24.585255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.311 [2024-06-07 23:18:24.585284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.311 [2024-06-07 23:18:24.585290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.311 [2024-06-07 23:18:24.585296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.568 [2024-06-07 23:18:24.595656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-06-07 23:18:24.605310] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-06-07 23:18:24.605351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-06-07 23:18:24.605365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-06-07 23:18:24.605371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-06-07 23:18:24.605376] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.568 [2024-06-07 23:18:24.615697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-06-07 23:18:24.625253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-06-07 23:18:24.625291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-06-07 23:18:24.625304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-06-07 23:18:24.625311] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-06-07 23:18:24.625316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.568 [2024-06-07 23:18:24.635749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-06-07 23:18:24.645345] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-06-07 23:18:24.645387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-06-07 23:18:24.645401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-06-07 23:18:24.645407] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-06-07 23:18:24.645413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.568 [2024-06-07 23:18:24.655739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-06-07 23:18:24.665470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-06-07 23:18:24.665508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-06-07 23:18:24.665522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-06-07 23:18:24.665528] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-06-07 23:18:24.665537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.568 [2024-06-07 23:18:24.675623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-06-07 23:18:24.685552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-06-07 23:18:24.685590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-06-07 23:18:24.685604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.685610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.685616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.695999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-06-07 23:18:24.705627] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.705660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.705673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.705680] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.705685] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.715829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-06-07 23:18:24.725657] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.725694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.725707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.725714] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.725719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.736038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-06-07 23:18:24.745836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.745874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.745887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.745893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.745899] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.756054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-06-07 23:18:24.765690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.765733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.765747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.765753] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.765759] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.775981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-06-07 23:18:24.785747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.785785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.785801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.785807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.785813] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.796096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-06-07 23:18:24.805905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.805944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.805958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.805964] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.805970] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.816004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-06-07 23:18:24.826002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-06-07 23:18:24.826048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-06-07 23:18:24.826063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-06-07 23:18:24.826069] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-06-07 23:18:24.826075] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.569 [2024-06-07 23:18:24.836185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.826 [2024-06-07 23:18:24.845864] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.826 [2024-06-07 23:18:24.845901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.826 [2024-06-07 23:18:24.845918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.826 [2024-06-07 23:18:24.845925] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.826 [2024-06-07 23:18:24.845930] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.826 [2024-06-07 23:18:24.856303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-06-07 23:18:24.866027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.826 [2024-06-07 23:18:24.866063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.826 [2024-06-07 23:18:24.866078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.826 [2024-06-07 23:18:24.866084] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.826 [2024-06-07 23:18:24.866090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.826 [2024-06-07 23:18:24.876409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-06-07 23:18:24.886040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.826 [2024-06-07 23:18:24.886075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.826 [2024-06-07 23:18:24.886088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.826 [2024-06-07 23:18:24.886095] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.826 [2024-06-07 23:18:24.886100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.826 [2024-06-07 23:18:24.896323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.826 qpair failed and we were unable to recover it. 00:27:32.826 [2024-06-07 23:18:24.906207] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.826 [2024-06-07 23:18:24.906246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.826 [2024-06-07 23:18:24.906259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.826 [2024-06-07 23:18:24.906265] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.826 [2024-06-07 23:18:24.906271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.826 [2024-06-07 23:18:24.916279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.826 qpair failed and we were unable to recover it. 
00:27:32.826 [2024-06-07 23:18:24.926192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.826 [2024-06-07 23:18:24.926231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.826 [2024-06-07 23:18:24.926245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.826 [2024-06-07 23:18:24.926251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:24.926258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:24.936552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:24.946249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:24.946290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:24.946304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:24.946310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:24.946316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:24.956628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:24.966328] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:24.966364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:24.966377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:24.966384] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:24.966389] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:24.976485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 
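Annotation: the target-side "Unknown controller ID 0x1" above refers to the CNTLID field carried in the Connect command's 1024-byte data buffer. The admin-queue CONNECT passes 0xFFFF to request dynamic controller allocation; each I/O-queue CONNECT must then pass the CNTLID the target returned, and the target rejects the command (Connect Invalid Parameters, as decoded earlier) when that controller has already been destroyed, which is the condition this test provokes on purpose. A spec-derived sketch of that data layout, with illustrative field names rather than the repository's own definitions:

#include <stdint.h>

/* NVMe over Fabrics Connect command data (1024 bytes), as described in the
 * Fabrics spec; illustrative layout, not copied from SPDK's nvmf_spec.h. */
struct nvmf_connect_data {
	uint8_t  hostid[16];	/* host identifier (UUID)                       */
	uint16_t cntlid;	/* 0xFFFF on the admin connect (dynamic mode),  */
				/* the allocated controller ID on I/O connects  */
	uint8_t  reserved1[238];
	char     subnqn[256];	/* e.g. nqn.2016-06.io.spdk:cnode1 in this log  */
	char     hostnqn[256];
	uint8_t  reserved2[256];
};

/* 16 + 2 + 238 + 256 + 256 + 256 = 1024 */
_Static_assert(sizeof(struct nvmf_connect_data) == 1024,
	       "Connect data must be exactly 1024 bytes");

The SUBNQN seen in every entry above, nqn.2016-06.io.spdk:cnode1, travels in the subnqn field of this structure; the rejected controller ID (0x1) travels in cntlid.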
00:27:32.827 [2024-06-07 23:18:24.986347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:24.986385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:24.986399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:24.986406] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:24.986411] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:24.996693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:25.006363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:25.006403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:25.006417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:25.006423] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:25.006429] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:25.016741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:25.026422] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:25.026454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:25.026471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:25.026478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:25.026483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:25.036809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:32.827 [2024-06-07 23:18:25.046487] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:25.046526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:25.046540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:25.046547] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:25.046552] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:25.056836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:25.066504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:25.066541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:25.066555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:25.066561] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:25.066567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:25.076986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 00:27:32.827 [2024-06-07 23:18:25.086595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.827 [2024-06-07 23:18:25.086639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.827 [2024-06-07 23:18:25.086654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.827 [2024-06-07 23:18:25.086660] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.827 [2024-06-07 23:18:25.086666] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:32.827 [2024-06-07 23:18:25.096979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.827 qpair failed and we were unable to recover it. 
00:27:33.084 [2024-06-07 23:18:25.106609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.084 [2024-06-07 23:18:25.106648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.084 [2024-06-07 23:18:25.106662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.084 [2024-06-07 23:18:25.106668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.084 [2024-06-07 23:18:25.106677] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.084 [2024-06-07 23:18:25.116893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.084 qpair failed and we were unable to recover it. 00:27:33.084 [2024-06-07 23:18:25.126807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.126841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.126855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.126862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.126867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.137116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.146645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.146681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.146694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.146700] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.146706] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.157134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 
00:27:33.085 [2024-06-07 23:18:25.166759] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.166801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.166814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.166821] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.166827] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.177304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.186903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.186940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.186954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.186961] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.186966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.197305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.206917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.206957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.206971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.206977] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.206983] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.217405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 
00:27:33.085 [2024-06-07 23:18:25.227102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.227141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.227154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.227161] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.227166] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.237257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.247029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.247073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.247087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.247094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.247099] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.257551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.267129] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.267165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.267179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.267186] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.267192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.277339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 
00:27:33.085 [2024-06-07 23:18:25.287159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.287196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.287214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.287220] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.287226] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.297626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.307254] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.307292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.307306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.307313] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.307318] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.317722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.085 [2024-06-07 23:18:25.327302] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.327340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.327354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.327360] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.327366] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.337730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 
00:27:33.085 [2024-06-07 23:18:25.347462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.085 [2024-06-07 23:18:25.347499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.085 [2024-06-07 23:18:25.347513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.085 [2024-06-07 23:18:25.347519] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.085 [2024-06-07 23:18:25.347525] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.085 [2024-06-07 23:18:25.357825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.085 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.367448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.367487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.367501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.367508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.367514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.377953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.387439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.387476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.387490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.387496] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.387502] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.397952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 
00:27:33.343 [2024-06-07 23:18:25.407597] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.407632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.407646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.407652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.407658] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.418078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.427680] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.427720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.427733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.427740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.427745] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.437930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.447645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.447681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.447695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.447701] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.447707] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.457995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 
00:27:33.343 [2024-06-07 23:18:25.467639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.467678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.467696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.467703] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.467708] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.477964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.487799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.487836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.487849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.487856] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.487862] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.498385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.507828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.507869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.507882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.507888] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.507894] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.518327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 
00:27:33.343 [2024-06-07 23:18:25.527812] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.527851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.527865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.527871] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.527876] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.538359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.547953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.343 [2024-06-07 23:18:25.547990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.343 [2024-06-07 23:18:25.548004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.343 [2024-06-07 23:18:25.548015] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.343 [2024-06-07 23:18:25.548026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.343 [2024-06-07 23:18:25.558569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.343 qpair failed and we were unable to recover it. 00:27:33.343 [2024-06-07 23:18:25.567994] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.344 [2024-06-07 23:18:25.568039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.344 [2024-06-07 23:18:25.568053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.344 [2024-06-07 23:18:25.568059] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.344 [2024-06-07 23:18:25.568065] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.344 [2024-06-07 23:18:25.578420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.344 qpair failed and we were unable to recover it. 
00:27:33.344 [2024-06-07 23:18:25.588062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.344 [2024-06-07 23:18:25.588100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.344 [2024-06-07 23:18:25.588115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.344 [2024-06-07 23:18:25.588121] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.344 [2024-06-07 23:18:25.588128] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.344 [2024-06-07 23:18:25.598519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.344 qpair failed and we were unable to recover it. 00:27:33.344 [2024-06-07 23:18:25.608064] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.344 [2024-06-07 23:18:25.608100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.344 [2024-06-07 23:18:25.608115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.344 [2024-06-07 23:18:25.608121] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.344 [2024-06-07 23:18:25.608127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.344 [2024-06-07 23:18:25.618451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.344 qpair failed and we were unable to recover it. 00:27:33.601 [2024-06-07 23:18:25.628104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.601 [2024-06-07 23:18:25.628141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.601 [2024-06-07 23:18:25.628155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.601 [2024-06-07 23:18:25.628161] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.601 [2024-06-07 23:18:25.628167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.601 [2024-06-07 23:18:25.638628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.601 qpair failed and we were unable to recover it. 
00:27:33.601 [2024-06-07 23:18:25.648145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.601 [2024-06-07 23:18:25.648180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.601 [2024-06-07 23:18:25.648194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.601 [2024-06-07 23:18:25.648200] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.601 [2024-06-07 23:18:25.648205] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.601 [2024-06-07 23:18:25.658645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.601 qpair failed and we were unable to recover it. 00:27:33.601 [2024-06-07 23:18:25.668340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.601 [2024-06-07 23:18:25.668379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.601 [2024-06-07 23:18:25.668393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.601 [2024-06-07 23:18:25.668399] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.601 [2024-06-07 23:18:25.668405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.601 [2024-06-07 23:18:25.678784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.601 qpair failed and we were unable to recover it. 00:27:33.601 [2024-06-07 23:18:25.688304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.601 [2024-06-07 23:18:25.688343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.601 [2024-06-07 23:18:25.688357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.601 [2024-06-07 23:18:25.688363] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.601 [2024-06-07 23:18:25.688369] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.601 [2024-06-07 23:18:25.698667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.601 qpair failed and we were unable to recover it. 
00:27:33.601 [2024-06-07 23:18:25.708336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.601 [2024-06-07 23:18:25.708373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.601 [2024-06-07 23:18:25.708387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.601 [2024-06-07 23:18:25.708393] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.601 [2024-06-07 23:18:25.708399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.601 [2024-06-07 23:18:25.718967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.601 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.728369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.728410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.728427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.728433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.728438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.738843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.748514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.748553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.748566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.748573] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.748578] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.758793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 
00:27:33.602 [2024-06-07 23:18:25.768493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.768525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.768538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.768545] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.768550] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.778819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.788579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.788617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.788632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.788638] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.788644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.799183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.808671] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.808714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.808728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.808735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.808741] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.818947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 
00:27:33.602 [2024-06-07 23:18:25.828690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.828729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.828742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.828749] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.828754] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.839077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.848775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.848812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.848826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.848832] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.848838] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.602 [2024-06-07 23:18:25.859164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.602 qpair failed and we were unable to recover it. 00:27:33.602 [2024-06-07 23:18:25.868840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.602 [2024-06-07 23:18:25.868878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.602 [2024-06-07 23:18:25.868892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.602 [2024-06-07 23:18:25.868898] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.602 [2024-06-07 23:18:25.868904] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.859 [2024-06-07 23:18:25.879219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 
00:27:33.860 [2024-06-07 23:18:25.888831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.888869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.888884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.888890] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.888896] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.899229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:25.908934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.908971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.908988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.908995] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.909001] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.919413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:25.928939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.928971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.928984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.928990] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.928996] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.939359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 
00:27:33.860 [2024-06-07 23:18:25.948997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.949039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.949053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.949059] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.949064] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.959493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:25.969169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.969211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.969230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.969236] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.969242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.979515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:25.989211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:25.989247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:25.989262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:25.989269] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:25.989278] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:25.999532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 
00:27:33.860 [2024-06-07 23:18:26.009276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:26.009313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:26.009327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:26.009333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:26.009339] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:26.019608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:26.029341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:26.029378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:26.029392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:26.029398] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:26.029404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:26.039753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 00:27:33.860 [2024-06-07 23:18:26.049400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:26.049440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:26.049454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:26.049460] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.860 [2024-06-07 23:18:26.049466] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.860 [2024-06-07 23:18:26.059938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.860 qpair failed and we were unable to recover it. 
00:27:33.860 [2024-06-07 23:18:26.069444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.860 [2024-06-07 23:18:26.069482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.860 [2024-06-07 23:18:26.069497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.860 [2024-06-07 23:18:26.069503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.861 [2024-06-07 23:18:26.069509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.861 [2024-06-07 23:18:26.079761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.861 qpair failed and we were unable to recover it. 00:27:33.861 [2024-06-07 23:18:26.089486] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.861 [2024-06-07 23:18:26.089529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.861 [2024-06-07 23:18:26.089544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.861 [2024-06-07 23:18:26.089551] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.861 [2024-06-07 23:18:26.089556] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.861 [2024-06-07 23:18:26.099897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.861 qpair failed and we were unable to recover it. 00:27:33.861 [2024-06-07 23:18:26.109534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.861 [2024-06-07 23:18:26.109571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.861 [2024-06-07 23:18:26.109585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.861 [2024-06-07 23:18:26.109591] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.861 [2024-06-07 23:18:26.109597] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:33.861 [2024-06-07 23:18:26.119995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:33.861 qpair failed and we were unable to recover it. 
00:27:33.861 [2024-06-07 23:18:26.129629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.861 [2024-06-07 23:18:26.129666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.861 [2024-06-07 23:18:26.129682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.861 [2024-06-07 23:18:26.129688] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.861 [2024-06-07 23:18:26.129694] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.118 [2024-06-07 23:18:26.139964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.118 qpair failed and we were unable to recover it. 00:27:34.118 [2024-06-07 23:18:26.149608] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.118 [2024-06-07 23:18:26.149644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.118 [2024-06-07 23:18:26.149658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.118 [2024-06-07 23:18:26.149665] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.118 [2024-06-07 23:18:26.149670] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.118 [2024-06-07 23:18:26.160185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.118 qpair failed and we were unable to recover it. 00:27:34.118 [2024-06-07 23:18:26.169710] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.118 [2024-06-07 23:18:26.169746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.118 [2024-06-07 23:18:26.169764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.118 [2024-06-07 23:18:26.169770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.118 [2024-06-07 23:18:26.169776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.118 [2024-06-07 23:18:26.180162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.118 qpair failed and we were unable to recover it. 
00:27:34.118 [2024-06-07 23:18:26.189861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.118 [2024-06-07 23:18:26.189901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.189915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.189922] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.189928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.200260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.209820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.209862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.209876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.209882] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.209888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.220272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.229862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.229900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.229914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.229920] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.229926] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.240439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 
00:27:34.119 [2024-06-07 23:18:26.250047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.250081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.250094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.250100] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.250106] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.260318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.270096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.270134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.270148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.270155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.270160] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.280373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.290018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.290061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.290076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.290083] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.290088] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.300491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 
00:27:34.119 [2024-06-07 23:18:26.310194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.310229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.310259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.310266] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.310272] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.320455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.330205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.330241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.330255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.330261] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.330267] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.340423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.350270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.350309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.350326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.350332] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.350337] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.360668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 
00:27:34.119 [2024-06-07 23:18:26.370448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.370489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.370503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.370509] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.370516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.119 [2024-06-07 23:18:26.380487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.119 qpair failed and we were unable to recover it. 00:27:34.119 [2024-06-07 23:18:26.390408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.119 [2024-06-07 23:18:26.390446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.119 [2024-06-07 23:18:26.390461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.119 [2024-06-07 23:18:26.390467] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.119 [2024-06-07 23:18:26.390473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.376 [2024-06-07 23:18:26.400629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.376 qpair failed and we were unable to recover it. 00:27:34.376 [2024-06-07 23:18:26.410343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.376 [2024-06-07 23:18:26.410380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.376 [2024-06-07 23:18:26.410393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.376 [2024-06-07 23:18:26.410400] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.376 [2024-06-07 23:18:26.410405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.376 [2024-06-07 23:18:26.420788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.376 qpair failed and we were unable to recover it. 
00:27:34.376 [2024-06-07 23:18:26.430669] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.376 [2024-06-07 23:18:26.430706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.376 [2024-06-07 23:18:26.430720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.376 [2024-06-07 23:18:26.430726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.376 [2024-06-07 23:18:26.430735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.376 [2024-06-07 23:18:26.440687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.376 qpair failed and we were unable to recover it. 00:27:34.376 [2024-06-07 23:18:26.450684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.376 [2024-06-07 23:18:26.450727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.376 [2024-06-07 23:18:26.450740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.376 [2024-06-07 23:18:26.450747] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.376 [2024-06-07 23:18:26.450752] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.376 [2024-06-07 23:18:26.460931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.376 qpair failed and we were unable to recover it. 00:27:34.376 [2024-06-07 23:18:26.470641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.376 [2024-06-07 23:18:26.470674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.376 [2024-06-07 23:18:26.470688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.376 [2024-06-07 23:18:26.470694] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.376 [2024-06-07 23:18:26.470700] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.376 [2024-06-07 23:18:26.480984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.376 qpair failed and we were unable to recover it. 
00:27:34.376 [2024-06-07 23:18:26.490793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.376 [2024-06-07 23:18:26.490826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.376 [2024-06-07 23:18:26.490840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.376 [2024-06-07 23:18:26.490847] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.376 [2024-06-07 23:18:26.490853] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.501087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.510888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.510924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.510938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.510944] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.510950] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.521116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.530833] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.530873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.530886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.530893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.530898] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.541330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 
00:27:34.377 [2024-06-07 23:18:26.550964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.551002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.551020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.551027] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.551033] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.561119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.571032] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.571066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.571080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.571087] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.571093] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.581400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.591155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.591192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.591205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.591211] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.591217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.601452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 
00:27:34.377 [2024-06-07 23:18:26.611090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.611132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.611149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.611156] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.611162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.621441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.631164] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.631203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.631216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.631223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.631228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.377 [2024-06-07 23:18:26.641726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.377 qpair failed and we were unable to recover it. 00:27:34.377 [2024-06-07 23:18:26.651349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.377 [2024-06-07 23:18:26.651387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.377 [2024-06-07 23:18:26.651401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.377 [2024-06-07 23:18:26.651408] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.377 [2024-06-07 23:18:26.651413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.634 [2024-06-07 23:18:26.661507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.634 qpair failed and we were unable to recover it. 
00:27:34.634 [2024-06-07 23:18:26.671438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.634 [2024-06-07 23:18:26.671476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.634 [2024-06-07 23:18:26.671490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.634 [2024-06-07 23:18:26.671497] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.634 [2024-06-07 23:18:26.671503] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.634 [2024-06-07 23:18:26.681685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.634 qpair failed and we were unable to recover it. 00:27:34.634 [2024-06-07 23:18:26.691458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.634 [2024-06-07 23:18:26.691497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.634 [2024-06-07 23:18:26.691510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.634 [2024-06-07 23:18:26.691516] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.634 [2024-06-07 23:18:26.691522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.634 [2024-06-07 23:18:26.701853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.634 qpair failed and we were unable to recover it. 00:27:34.634 [2024-06-07 23:18:26.711555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.634 [2024-06-07 23:18:26.711589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.634 [2024-06-07 23:18:26.711602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.634 [2024-06-07 23:18:26.711609] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.634 [2024-06-07 23:18:26.711615] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.634 [2024-06-07 23:18:26.721882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.634 qpair failed and we were unable to recover it. 
00:27:34.634 [2024-06-07 23:18:26.731531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.634 [2024-06-07 23:18:26.731569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.634 [2024-06-07 23:18:26.731583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.731590] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.731596] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.741963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.751568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.751603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.751616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.751623] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.751629] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.762109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.771702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.771738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.771752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.771758] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.771764] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.782080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 
00:27:34.635 [2024-06-07 23:18:26.791639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.791675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.791692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.791698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.791704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.802066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.811714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.811750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.811764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.811770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.811776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.822053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.831844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.831882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.831897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.831903] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.831909] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.842093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 
00:27:34.635 [2024-06-07 23:18:26.851729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.851766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.851780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.851786] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.851792] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.862176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.871879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.871918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.871932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.871938] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.871948] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.882345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 00:27:34.635 [2024-06-07 23:18:26.891908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.635 [2024-06-07 23:18:26.891944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.635 [2024-06-07 23:18:26.891957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.635 [2024-06-07 23:18:26.891963] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.635 [2024-06-07 23:18:26.891969] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.635 [2024-06-07 23:18:26.902296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.635 qpair failed and we were unable to recover it. 
00:27:34.892 [2024-06-07 23:18:26.912087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:26.912124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:26.912137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:26.912143] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:26.912149] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:26.922545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:26.932018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:26.932053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:26.932069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:26.932076] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:26.932081] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:26.942457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:26.952081] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:26.952114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:26.952128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:26.952134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:26.952140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:26.962646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 
00:27:34.893 [2024-06-07 23:18:26.972159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:26.972195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:26.972209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:26.972216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:26.972222] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:26.982498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:26.992167] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:26.992203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:26.992217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:26.992223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:26.992230] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.002571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:27.012280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.012317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.012330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.012337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.012342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.022759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 
00:27:34.893 [2024-06-07 23:18:27.032396] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.032433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.032448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.032454] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.032460] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.042738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:27.052431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.052470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.052486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.052493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.052499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.062828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:27.072490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.072525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.072538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.072544] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.072550] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.082879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 
00:27:34.893 [2024-06-07 23:18:27.092512] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.092553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.092567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.092573] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.092579] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.103004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:27.112584] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.112621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.112635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.112641] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.112647] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.123059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:34.893 [2024-06-07 23:18:27.132692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.132729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.132743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.132750] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.132755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.143097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 
00:27:34.893 [2024-06-07 23:18:27.152774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.893 [2024-06-07 23:18:27.152809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.893 [2024-06-07 23:18:27.152822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.893 [2024-06-07 23:18:27.152828] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.893 [2024-06-07 23:18:27.152834] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:34.893 [2024-06-07 23:18:27.163082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.893 qpair failed and we were unable to recover it. 00:27:35.150 [2024-06-07 23:18:27.172784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.172820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.172834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.172840] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.172845] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.183244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 00:27:35.150 [2024-06-07 23:18:27.192852] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.192885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.192898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.192905] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.192911] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.203244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 
00:27:35.150 [2024-06-07 23:18:27.212928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.212961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.212975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.212981] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.212987] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.223323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 00:27:35.150 [2024-06-07 23:18:27.232831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.232871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.232888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.232894] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.232900] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.243398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 00:27:35.150 [2024-06-07 23:18:27.253053] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.253096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.253109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.253115] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.253121] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.263615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 
00:27:35.150 [2024-06-07 23:18:27.273082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.273116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.273129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.150 [2024-06-07 23:18:27.273136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.150 [2024-06-07 23:18:27.273142] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.150 [2024-06-07 23:18:27.283515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.150 qpair failed and we were unable to recover it. 00:27:35.150 [2024-06-07 23:18:27.293146] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.150 [2024-06-07 23:18:27.293177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.150 [2024-06-07 23:18:27.293191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.293197] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.293203] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.303620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 00:27:35.151 [2024-06-07 23:18:27.313253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.313291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.313304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.313311] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.313320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.323631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 
00:27:35.151 [2024-06-07 23:18:27.333254] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.333291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.333305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.333311] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.333317] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.343787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 00:27:35.151 [2024-06-07 23:18:27.353324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.353357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.353370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.353377] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.353383] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.363789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 00:27:35.151 [2024-06-07 23:18:27.373331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.373363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.373376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.373382] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.373388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.383882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 
00:27:35.151 [2024-06-07 23:18:27.393506] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.393542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.393555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.393561] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.393567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.404034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 00:27:35.151 [2024-06-07 23:18:27.413514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.151 [2024-06-07 23:18:27.413553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.151 [2024-06-07 23:18:27.413566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.151 [2024-06-07 23:18:27.413572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.151 [2024-06-07 23:18:27.413578] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.151 [2024-06-07 23:18:27.423999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.151 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.433569] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.433610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.433624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.433630] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.433636] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.444015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 
00:27:35.408 [2024-06-07 23:18:27.453653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.453688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.453702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.453708] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.453714] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.464067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.473716] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.473753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.473766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.473772] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.473778] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.484128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.493686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.493718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.493735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.493741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.493747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.504262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 
00:27:35.408 [2024-06-07 23:18:27.513771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.513807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.513820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.513827] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.513833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.524110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.533968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.534001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.534019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.534026] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.534032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.544219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.553968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.408 [2024-06-07 23:18:27.554005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.408 [2024-06-07 23:18:27.554022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.408 [2024-06-07 23:18:27.554029] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.408 [2024-06-07 23:18:27.554035] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:35.408 [2024-06-07 23:18:27.564424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:35.408 qpair failed and we were unable to recover it. 00:27:35.408 [2024-06-07 23:18:27.564548] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:35.408 A controller has encountered a failure and is being reset. 
00:27:35.408 [2024-06-07 23:18:27.564663] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:35.408 [2024-06-07 23:18:27.566716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:35.409 Controller properly reset. 00:27:36.339 Read completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Write completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Write completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Read completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Read completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Read completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.339 Write completed with error (sct=0, sc=8) 00:27:36.339 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Write completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 Read completed with error (sct=0, sc=8) 00:27:36.340 starting I/O failed 00:27:36.340 [2024-06-07 23:18:28.579767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read 
completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Read completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.712 Write completed with error (sct=0, sc=8) 00:27:37.712 starting I/O failed 00:27:37.713 Write completed with error (sct=0, sc=8) 00:27:37.713 starting I/O failed 00:27:37.713 Write completed with error (sct=0, sc=8) 00:27:37.713 starting I/O failed 00:27:37.713 Read completed with error (sct=0, sc=8) 00:27:37.713 starting I/O failed 00:27:37.713 Write completed with error (sct=0, sc=8) 00:27:37.713 starting I/O failed 00:27:37.713 [2024-06-07 23:18:29.592063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:37.713 Initializing NVMe Controllers 00:27:37.713 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.713 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.713 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:37.713 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:37.713 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:37.713 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:37.713 Initialization complete. Launching workers. 
00:27:37.713 Starting thread on core 1 00:27:37.713 Starting thread on core 2 00:27:37.713 Starting thread on core 3 00:27:37.713 Starting thread on core 0 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:37.713 00:27:37.713 real 0m13.439s 00:27:37.713 user 0m27.548s 00:27:37.713 sys 0m2.465s 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.713 ************************************ 00:27:37.713 END TEST nvmf_target_disconnect_tc2 00:27:37.713 ************************************ 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.713 ************************************ 00:27:37.713 START TEST nvmf_target_disconnect_tc3 00:27:37.713 ************************************ 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc3 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1081034 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:27:37.713 23:18:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:27:37.713 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.611 23:18:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1079628 00:27:39.611 23:18:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 
Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Read completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.997 Write completed with error (sct=0, sc=8) 00:27:40.997 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Write completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 Read completed with error (sct=0, sc=8) 00:27:40.998 starting I/O failed 00:27:40.998 [2024-06-07 23:18:32.896453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.567 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1079628 Killed "${NVMF_APP[@]}" "$@" 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1081664 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1081664 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1081664 ']' 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 
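The tc3 flow traced above drives I/O through SPDK's reconnect example while the first target is killed out from under it. A minimal stand-alone sketch of that invocation follows, with every flag copied verbatim from the xtrace; only the RECONNECT shell variable is introduced here as shorthand, and the flag glosses in the comments follow the example's usual perf-style conventions rather than anything stated in the log itself.

    # Queue depth 32, 4096-byte random read/write at a 50% mix, run for 10 s on
    # core mask 0xF (cores 0-3, matching the "Starting thread on core 0..3" lines),
    # against the primary RDMA listener with 192.168.100.9 as the alternate address.
    RECONNECT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
    "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'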
00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:41.567 23:18:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:41.567 [2024-06-07 23:18:33.786426] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:27:41.567 [2024-06-07 23:18:33.786472] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.567 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.825 [2024-06-07 23:18:33.860921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.825 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, 
sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Write completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 Read completed with error (sct=0, sc=8) 00:27:41.826 starting I/O failed 00:27:41.826 [2024-06-07 23:18:33.901478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:41.826 [2024-06-07 23:18:33.932408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.826 [2024-06-07 23:18:33.932447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.826 [2024-06-07 23:18:33.932454] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.826 [2024-06-07 23:18:33.932460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.826 [2024-06-07 23:18:33.932464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.826 [2024-06-07 23:18:33.932579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:27:41.826 [2024-06-07 23:18:33.932699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:27:41.826 [2024-06-07 23:18:33.932807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:27:41.826 [2024-06-07 23:18:33.932808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@863 -- # return 0 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 Malloc0 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:42.392 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.392 23:18:34 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.392 [2024-06-07 23:18:34.669052] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbfbb00/0xc07700) succeed. 00:27:42.650 [2024-06-07 23:18:34.678422] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbfd140/0xca7800) succeed. 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.650 [2024-06-07 23:18:34.820925] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.650 23:18:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1081034 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 
00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Write completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 Read completed with error (sct=0, sc=8) 00:27:42.650 starting I/O failed 00:27:42.650 [2024-06-07 23:18:34.906599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.650 [2024-06-07 23:18:34.908136] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:42.650 [2024-06-07 23:18:34.908154] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:42.650 [2024-06-07 23:18:34.908161] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:44.022 [2024-06-07 23:18:35.912128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.022 qpair failed and we were unable to recover it. 
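The failover target configuration issued above through the autotest rpc_cmd wrapper maps onto plain scripts/rpc.py calls; a minimal sketch of the same sequence, assuming an nvmf_tgt already listening on the default /var/tmp/spdk.sock RPC socket and the spdk repository root as the working directory:

    # recreate the subsystem this test exposes on the alternate address 192.168.100.9
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420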
00:27:44.022 [2024-06-07 23:18:35.913496] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:44.022 [2024-06-07 23:18:35.913511] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:44.022 [2024-06-07 23:18:35.913517] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:44.955 [2024-06-07 23:18:36.917429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.955 qpair failed and we were unable to recover it. 00:27:44.955 [2024-06-07 23:18:36.918827] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:44.955 [2024-06-07 23:18:36.918842] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:44.955 [2024-06-07 23:18:36.918848] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:45.885 [2024-06-07 23:18:37.922661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:45.885 qpair failed and we were unable to recover it. 00:27:45.885 [2024-06-07 23:18:37.924030] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:45.885 [2024-06-07 23:18:37.924045] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:45.885 [2024-06-07 23:18:37.924052] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:46.817 [2024-06-07 23:18:38.927972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:46.817 qpair failed and we were unable to recover it. 00:27:46.817 [2024-06-07 23:18:38.929441] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:46.817 [2024-06-07 23:18:38.929456] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:46.817 [2024-06-07 23:18:38.929463] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:47.750 [2024-06-07 23:18:39.933366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:47.750 qpair failed and we were unable to recover it. 00:27:47.750 [2024-06-07 23:18:39.934819] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:47.750 [2024-06-07 23:18:39.934836] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:47.750 [2024-06-07 23:18:39.934842] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:48.683 [2024-06-07 23:18:40.938535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:48.683 qpair failed and we were unable to recover it. 
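While the host keeps receiving RDMA_CM_EVENT_REJECTED during this window, the listener added on 192.168.100.9:4420 can be probed from either side; a small sketch (nvme-cli on the initiator and the default RPC socket on the target are assumptions):

    # initiator side: ask the discovery service on the failover address for its subsystems
    nvme discover -t rdma -a 192.168.100.9 -s 4420
    # target side: confirm the subsystem and its listeners over the RPC socket
    ./scripts/rpc.py nvmf_get_subsystems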
00:27:48.683 [2024-06-07 23:18:40.939981] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:48.683 [2024-06-07 23:18:40.939997] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:48.683 [2024-06-07 23:18:40.940003] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:50.088 [2024-06-07 23:18:41.943958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:50.088 qpair failed and we were unable to recover it. 00:27:50.088 [2024-06-07 23:18:41.945677] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:50.088 [2024-06-07 23:18:41.945698] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:50.088 [2024-06-07 23:18:41.945705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:51.017 [2024-06-07 23:18:42.949492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:51.017 qpair failed and we were unable to recover it. 00:27:51.017 [2024-06-07 23:18:42.950909] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:51.017 [2024-06-07 23:18:42.950923] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:51.017 [2024-06-07 23:18:42.950929] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:51.947 [2024-06-07 23:18:43.954974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:51.947 qpair failed and we were unable to recover it. 00:27:51.947 [2024-06-07 23:18:43.955097] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:51.947 A controller has encountered a failure and is being reset. 00:27:51.947 Resorting to new failover address 192.168.100.9 00:27:51.947 [2024-06-07 23:18:43.956748] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:51.947 [2024-06-07 23:18:43.956776] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:51.947 [2024-06-07 23:18:43.956787] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:52.879 [2024-06-07 23:18:44.960757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:52.879 qpair failed and we were unable to recover it. 
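The controller reset and the 'Resorting to new failover address 192.168.100.9' message a little further below come from the reconnect example that host/target_disconnect.sh launched for this test case; its invocation, copied from the xtrace output earlier in this run, shows how the alternate address is carried in the -r transport-ID string next to the primary one:

    # as launched for tc3 in this run (queue depth 32, 4 KiB random read/write, 10 s, cores 0-3)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'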
00:27:52.879 [2024-06-07 23:18:44.962344] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:52.879 [2024-06-07 23:18:44.962361] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:52.879 [2024-06-07 23:18:44.962367] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:53.810 [2024-06-07 23:18:45.966288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-06-07 23:18:45.966403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.810 [2024-06-07 23:18:45.966507] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:53.810 [2024-06-07 23:18:45.968565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:53.810 Controller properly reset. 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error 
(sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Write completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 Read completed with error (sct=0, sc=8) 00:27:54.741 starting I/O failed 00:27:54.741 [2024-06-07 23:18:47.013217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:54.999 Initializing NVMe Controllers 00:27:54.999 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.999 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:54.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:54.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:54.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:54.999 Initialization complete. Launching workers. 00:27:54.999 Starting thread on core 1 00:27:54.999 Starting thread on core 2 00:27:54.999 Starting thread on core 3 00:27:54.999 Starting thread on core 0 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:27:54.999 00:27:54.999 real 0m17.338s 00:27:54.999 user 1m4.721s 00:27:54.999 sys 0m3.925s 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.999 ************************************ 00:27:54.999 END TEST nvmf_target_disconnect_tc3 00:27:54.999 ************************************ 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:54.999 rmmod nvme_rdma 00:27:54.999 rmmod nvme_fabrics 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1081664 ']' 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@490 -- # killprocess 1081664 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1081664 ']' 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1081664 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:27:54.999 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1081664 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1081664' 00:27:55.000 killing process with pid 1081664 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1081664 00:27:55.000 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1081664 00:27:55.257 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.257 23:18:47 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:55.257 00:27:55.257 real 0m38.522s 00:27:55.257 user 2m28.592s 00:27:55.257 sys 0m11.471s 00:27:55.257 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:55.257 23:18:47 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 ************************************ 00:27:55.257 END TEST nvmf_target_disconnect 00:27:55.257 ************************************ 00:27:55.257 23:18:47 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:27:55.257 23:18:47 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:55.257 23:18:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 23:18:47 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:55.257 00:27:55.257 real 20m57.747s 00:27:55.257 user 52m58.029s 00:27:55.257 sys 4m48.939s 00:27:55.257 23:18:47 nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:55.257 23:18:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 ************************************ 00:27:55.257 END TEST nvmf_rdma 00:27:55.257 ************************************ 00:27:55.517 23:18:47 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:55.517 23:18:47 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:55.517 23:18:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:55.517 23:18:47 -- common/autotest_common.sh@10 -- # set +x 00:27:55.517 ************************************ 00:27:55.517 START TEST spdkcli_nvmf_rdma 00:27:55.517 ************************************ 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:55.517 * Looking for test storage... 
00:27:55.517 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1083936 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1083936 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@830 -- # '[' -z 1083936 ']' 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:55.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.517 23:18:47 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:55.517 [2024-06-07 23:18:47.707311] Starting SPDK v24.09-pre git sha1 86abcfbbd / DPDK 24.03.0 initialization... 00:27:55.517 [2024-06-07 23:18:47.707361] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083936 ] 00:27:55.517 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.517 [2024-06-07 23:18:47.767582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:55.776 [2024-06-07 23:18:47.848392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.776 [2024-06-07 23:18:47.848394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@863 -- # return 0 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:56.344 23:18:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.908 
23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:28:02.908 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:28:02.908 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 
(0x15b3 - 0x1015)' 00:28:02.909 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:28:02.909 Found net devices under 0000:da:00.0: mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:28:02.909 Found net devices under 0000:da:00.1: mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:02.909 226: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:02.909 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:28:02.909 altname enp218s0f0np0 00:28:02.909 altname ens818f0np0 00:28:02.909 inet 192.168.100.8/24 scope global mlx_0_0 00:28:02.909 valid_lft forever preferred_lft forever 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:02.909 227: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:02.909 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:28:02.909 altname enp218s0f1np1 00:28:02.909 altname ens818f1np1 00:28:02.909 inet 192.168.100.9/24 scope global mlx_0_1 00:28:02.909 valid_lft forever preferred_lft forever 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 
00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:02.909 192.168.100.9' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:02.909 192.168.100.9' 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:28:02.909 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:02.910 192.168.100.9' 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:02.910 23:18:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:02.910 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:02.910 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:02.910 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:02.910 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:02.910 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:02.910 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:02.910 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:02.910 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:02.910 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:02.910 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:02.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:02.910 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:02.910 ' 00:28:05.443 [2024-06-07 23:18:57.118062] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9c2d40/0xb134c0) succeed. 00:28:05.443 [2024-06-07 23:18:57.130150] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9c4420/0x9d32c0) succeed. 
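Note on the spdkcli_job.py call above: it feeds spdkcli a scripted list of commands and checks each command's output, creating the malloc bdevs, the RDMA transport, and the subsystems with their namespaces, listeners and allowed hosts; that is what produces the create_ib_device and "Target Listening" notices that follow. The same commands can be replayed one at a time against a running nvmf_tgt with one-shot spdkcli.py invocations, as in the abbreviated sketch below, which reuses a few of the exact commands from the job (the one-shot form is only an illustration; the test's actual driver is spdkcli_job.py, which also verifies the expected output per command):

  # Requires a running SPDK target (nvmf_tgt) on the default RPC socket.
  SPDKCLI=./scripts/spdkcli.py      # path inside the spdk checkout
  IP=192.168.100.8                  # NVMF_TARGET_IP from the trace above

  "$SPDKCLI" /bdevs/malloc create 32 512 Malloc1
  "$SPDKCLI" /bdevs/malloc create 32 512 Malloc3
  "$SPDKCLI" nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  "$SPDKCLI" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  "$SPDKCLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  "$SPDKCLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma "$IP" 4260 IPv4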
00:28:06.378 [2024-06-07 23:18:58.360943] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:28:08.366 [2024-06-07 23:19:00.523775] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:28:10.269 [2024-06-07 23:19:02.381994] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:28:11.645 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:11.645 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:11.646 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:11.646 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:11.646 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:11.646 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:11.646 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:11.646 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:11.646 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:11.646 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:11.646 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:11.646 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:11.646 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:28:11.904 23:19:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:12.163 23:19:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:12.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:12.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:12.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:12.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:28:12.163 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:28:12.163 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:12.163 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:12.163 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:12.163 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:12.163 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:12.163 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:12.163 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:12.163 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:12.163 ' 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:28:17.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:28:17.431 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:17.431 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:17.431 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@949 -- # '[' -z 1083936 ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # kill -0 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # uname 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1083936' 00:28:17.431 killing process with pid 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # kill 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # wait 1083936 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
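Two helpers from the trace above are worth spelling out: check_match dumps the live configuration tree with "spdkcli.py ll /nvmf" and compares it against the stored pattern file using SPDK's match tool, and killprocess stops the target by PID once the clear-config commands have run. A condensed sketch of both, following the commands in the trace (the redirection into the .test file is implied by its later removal; the real helpers carry extra checks that are omitted here):

  # Verify the spdkcli tree against the recorded expectation.
  check_match() {
      ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
      ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
      rm -f test/spdkcli/match_files/spdkcli_nvmf.test
  }

  # Stop the nvmf_tgt reactor that served the test.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to kill
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }

After killprocess, nvmftestfini unloads the nvme-rdma and nvme-fabrics modules, which is the modprobe -r / rmmod output seen next in the log.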
00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:17.431 rmmod nvme_rdma 00:28:17.431 rmmod nvme_fabrics 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:17.431 00:28:17.431 real 0m22.101s 00:28:17.431 user 0m46.833s 00:28:17.431 sys 0m5.381s 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:17.431 23:19:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:17.431 ************************************ 00:28:17.431 END TEST spdkcli_nvmf_rdma 00:28:17.431 ************************************ 00:28:17.690 23:19:09 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:17.690 23:19:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:17.690 23:19:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:17.690 23:19:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:17.690 23:19:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:17.690 23:19:09 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:17.690 23:19:09 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:17.690 23:19:09 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:17.690 23:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:17.690 23:19:09 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:17.690 23:19:09 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:28:17.690 23:19:09 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:28:17.690 23:19:09 -- common/autotest_common.sh@10 -- # set +x 00:28:21.878 INFO: APP EXITING 00:28:21.878 INFO: killing all VMs 00:28:21.878 INFO: killing vhost app 00:28:21.878 INFO: EXIT DONE 00:28:25.164 Waiting for block devices as requested 00:28:25.164 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:28:25.164 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:25.164 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:25.164 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:25.164 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:25.164 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:25.164 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:25.422 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:28:25.422 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:25.422 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:25.422 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:25.680 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:25.680 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:25.680 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:25.938 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:25.938 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:25.938 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:29.227 Cleaning 00:28:29.227 Removing: /var/run/dpdk/spdk0/config 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:29.227 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:29.227 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:29.227 Removing: /var/run/dpdk/spdk1/config 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:29.227 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:29.227 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:29.227 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:29.227 Removing: /var/run/dpdk/spdk2/config 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:29.227 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:29.227 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:29.227 Removing: /var/run/dpdk/spdk3/config 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:29.227 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:29.227 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:29.227 Removing: /var/run/dpdk/spdk4/config 00:28:29.227 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:29.227 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:29.227 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:29.227 Removing: /dev/shm/bdevperf_trace.pid851453 00:28:29.227 Removing: /dev/shm/bdevperf_trace.pid997441 00:28:29.227 Removing: /dev/shm/bdev_svc_trace.1 00:28:29.227 Removing: /dev/shm/nvmf_trace.0 00:28:29.227 Removing: /dev/shm/spdk_tgt_trace.pid737544 00:28:29.227 Removing: /var/run/dpdk/spdk0 00:28:29.227 Removing: /var/run/dpdk/spdk1 00:28:29.227 Removing: /var/run/dpdk/spdk2 00:28:29.227 Removing: /var/run/dpdk/spdk3 00:28:29.227 Removing: /var/run/dpdk/spdk4 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1001670 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1009267 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1010189 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1011101 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1012019 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1012262 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1016980 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1016986 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1021532 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1022210 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1022677 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1023362 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1023411 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1028859 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1029436 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1033832 00:28:29.227 Removing: /var/run/dpdk/spdk_pid1036580 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1042311 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1052344 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1052346 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1071798 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1072031 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1078558 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1078939 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1081034 00:28:29.228 Removing: /var/run/dpdk/spdk_pid1083936 00:28:29.228 Removing: /var/run/dpdk/spdk_pid735204 00:28:29.228 Removing: /var/run/dpdk/spdk_pid736265 00:28:29.228 Removing: /var/run/dpdk/spdk_pid737544 00:28:29.228 Removing: /var/run/dpdk/spdk_pid738179 00:28:29.228 Removing: /var/run/dpdk/spdk_pid739121 00:28:29.228 Removing: /var/run/dpdk/spdk_pid739357 00:28:29.228 Removing: /var/run/dpdk/spdk_pid740337 00:28:29.228 Removing: /var/run/dpdk/spdk_pid740390 00:28:29.228 Removing: /var/run/dpdk/spdk_pid740682 00:28:29.228 Removing: /var/run/dpdk/spdk_pid745710 00:28:29.228 Removing: /var/run/dpdk/spdk_pid747206 00:28:29.228 Removing: /var/run/dpdk/spdk_pid747486 00:28:29.228 Removing: /var/run/dpdk/spdk_pid747773 00:28:29.228 Removing: /var/run/dpdk/spdk_pid748074 00:28:29.228 Removing: /var/run/dpdk/spdk_pid748371 00:28:29.487 Removing: /var/run/dpdk/spdk_pid748621 00:28:29.487 Removing: /var/run/dpdk/spdk_pid748869 00:28:29.487 Removing: /var/run/dpdk/spdk_pid749149 00:28:29.487 Removing: /var/run/dpdk/spdk_pid750107 00:28:29.487 Removing: /var/run/dpdk/spdk_pid752992 00:28:29.487 
Removing: /var/run/dpdk/spdk_pid753363 00:28:29.487 Removing: /var/run/dpdk/spdk_pid753626 00:28:29.487 Removing: /var/run/dpdk/spdk_pid753655 00:28:29.487 Removing: /var/run/dpdk/spdk_pid754247 00:28:29.487 Removing: /var/run/dpdk/spdk_pid754485 00:28:29.487 Removing: /var/run/dpdk/spdk_pid754762 00:28:29.487 Removing: /var/run/dpdk/spdk_pid754990 00:28:29.487 Removing: /var/run/dpdk/spdk_pid755248 00:28:29.487 Removing: /var/run/dpdk/spdk_pid755607 00:28:29.487 Removing: /var/run/dpdk/spdk_pid755965 00:28:29.487 Removing: /var/run/dpdk/spdk_pid756143 00:28:29.487 Removing: /var/run/dpdk/spdk_pid756694 00:28:29.487 Removing: /var/run/dpdk/spdk_pid756942 00:28:29.487 Removing: /var/run/dpdk/spdk_pid757235 00:28:29.487 Removing: /var/run/dpdk/spdk_pid757499 00:28:29.487 Removing: /var/run/dpdk/spdk_pid757525 00:28:29.487 Removing: /var/run/dpdk/spdk_pid757647 00:28:29.487 Removing: /var/run/dpdk/spdk_pid757918 00:28:29.487 Removing: /var/run/dpdk/spdk_pid758184 00:28:29.487 Removing: /var/run/dpdk/spdk_pid758445 00:28:29.487 Removing: /var/run/dpdk/spdk_pid758712 00:28:29.487 Removing: /var/run/dpdk/spdk_pid758971 00:28:29.487 Removing: /var/run/dpdk/spdk_pid759236 00:28:29.487 Removing: /var/run/dpdk/spdk_pid759534 00:28:29.487 Removing: /var/run/dpdk/spdk_pid759804 00:28:29.487 Removing: /var/run/dpdk/spdk_pid760054 00:28:29.487 Removing: /var/run/dpdk/spdk_pid760304 00:28:29.487 Removing: /var/run/dpdk/spdk_pid760554 00:28:29.487 Removing: /var/run/dpdk/spdk_pid760806 00:28:29.487 Removing: /var/run/dpdk/spdk_pid761053 00:28:29.487 Removing: /var/run/dpdk/spdk_pid761305 00:28:29.487 Removing: /var/run/dpdk/spdk_pid761558 00:28:29.487 Removing: /var/run/dpdk/spdk_pid761804 00:28:29.487 Removing: /var/run/dpdk/spdk_pid762062 00:28:29.487 Removing: /var/run/dpdk/spdk_pid762311 00:28:29.487 Removing: /var/run/dpdk/spdk_pid762558 00:28:29.487 Removing: /var/run/dpdk/spdk_pid762812 00:28:29.487 Removing: /var/run/dpdk/spdk_pid762879 00:28:29.487 Removing: /var/run/dpdk/spdk_pid763296 00:28:29.487 Removing: /var/run/dpdk/spdk_pid767363 00:28:29.487 Removing: /var/run/dpdk/spdk_pid811437 00:28:29.487 Removing: /var/run/dpdk/spdk_pid815736 00:28:29.487 Removing: /var/run/dpdk/spdk_pid826230 00:28:29.487 Removing: /var/run/dpdk/spdk_pid831619 00:28:29.487 Removing: /var/run/dpdk/spdk_pid835514 00:28:29.487 Removing: /var/run/dpdk/spdk_pid836209 00:28:29.487 Removing: /var/run/dpdk/spdk_pid851453 00:28:29.487 Removing: /var/run/dpdk/spdk_pid851828 00:28:29.487 Removing: /var/run/dpdk/spdk_pid856131 00:28:29.487 Removing: /var/run/dpdk/spdk_pid862289 00:28:29.487 Removing: /var/run/dpdk/spdk_pid865378 00:28:29.487 Removing: /var/run/dpdk/spdk_pid875695 00:28:29.487 Removing: /var/run/dpdk/spdk_pid901014 00:28:29.487 Removing: /var/run/dpdk/spdk_pid904837 00:28:29.487 Removing: /var/run/dpdk/spdk_pid952322 00:28:29.487 Removing: /var/run/dpdk/spdk_pid967730 00:28:29.487 Removing: /var/run/dpdk/spdk_pid995510 00:28:29.487 Removing: /var/run/dpdk/spdk_pid996352 00:28:29.487 Removing: /var/run/dpdk/spdk_pid997441 00:28:29.487 Clean 00:28:29.746 23:19:21 -- common/autotest_common.sh@1450 -- # return 0 00:28:29.746 23:19:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:29.746 23:19:21 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:29.746 23:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:29.746 23:19:21 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:29.746 23:19:21 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:29.746 23:19:21 -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.746 23:19:21 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:28:29.746 23:19:21 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:28:29.746 23:19:21 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:28:29.746 23:19:21 -- spdk/autotest.sh@391 -- # hash lcov 00:28:29.746 23:19:21 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:29.746 23:19:21 -- spdk/autotest.sh@393 -- # hostname 00:28:29.746 23:19:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:28:30.005 geninfo: WARNING: invalid characters removed from testname! 00:28:48.101 23:19:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:50.634 23:19:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:52.539 23:19:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:53.915 23:19:45 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:55.816 23:19:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:57.192 23:19:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:59.093 23:19:50 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:59.093 23:19:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:59.093 23:19:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:59.093 23:19:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.093 23:19:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.093 23:19:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.093 23:19:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.093 23:19:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.093 23:19:51 -- paths/export.sh@5 -- $ export PATH 00:28:59.093 23:19:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.094 23:19:51 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:28:59.094 23:19:51 -- common/autobuild_common.sh@437 -- $ date +%s 00:28:59.094 23:19:51 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717795191.XXXXXX 00:28:59.094 23:19:51 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717795191.ywDPBB 00:28:59.094 23:19:51 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:28:59.094 23:19:51 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:28:59.094 23:19:51 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:28:59.094 23:19:51 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:59.094 23:19:51 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:59.094 23:19:51 -- common/autobuild_common.sh@453 -- $ 
get_config_params 00:28:59.094 23:19:51 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:59.094 23:19:51 -- common/autotest_common.sh@10 -- $ set +x 00:28:59.094 23:19:51 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:28:59.094 23:19:51 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:28:59.094 23:19:51 -- pm/common@17 -- $ local monitor 00:28:59.094 23:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:59.094 23:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:59.094 23:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:59.094 23:19:51 -- pm/common@21 -- $ date +%s 00:28:59.094 23:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:59.094 23:19:51 -- pm/common@21 -- $ date +%s 00:28:59.094 23:19:51 -- pm/common@25 -- $ sleep 1 00:28:59.094 23:19:51 -- pm/common@21 -- $ date +%s 00:28:59.094 23:19:51 -- pm/common@21 -- $ date +%s 00:28:59.094 23:19:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717795191 00:28:59.094 23:19:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717795191 00:28:59.094 23:19:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717795191 00:28:59.094 23:19:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717795191 00:28:59.094 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717795191_collect-vmstat.pm.log 00:28:59.094 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717795191_collect-cpu-load.pm.log 00:28:59.094 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717795191_collect-cpu-temp.pm.log 00:28:59.094 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717795191_collect-bmc-pm.bmc.pm.log 00:29:00.146 23:19:52 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:00.146 23:19:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:29:00.146 23:19:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:00.146 23:19:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:00.146 23:19:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:00.146 23:19:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:00.146 23:19:52 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:00.146 23:19:52 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:00.146 23:19:52 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 
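Note on the post-processing traced above: once the test suite is done, lcov captures the gcov counters from the run, merges them with the baseline taken after the build, strips third-party and uninteresting paths, and flamegraph.pl turns timing.txt into a per-step flame graph. A condensed sketch with the same flags as the trace (LCOV_OPTS here is shorthand for the long --rc option list in the log, and the SVG output name is illustrative since the log does not show where the flame graph is written):

  out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # capture counters produced by the test run
  lcov $LCOV_OPTS -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t "$(hostname)" -o "$out/cov_test.info"
  # merge with the post-build baseline
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # drop dpdk, system headers and uninteresting apps from the totals
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done
  rm -f "$out/cov_base.info" "$out/cov_test.info"

  # render the step timings recorded in timing.txt
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds "$out/timing.txt" > "$out/timing.svg"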
00:29:00.146 23:19:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:00.146 23:19:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:00.146 23:19:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:00.146 23:19:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:00.146 23:19:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.146 23:19:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:00.146 23:19:52 -- pm/common@44 -- $ pid=1099207 00:29:00.146 23:19:52 -- pm/common@50 -- $ kill -TERM 1099207 00:29:00.146 23:19:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.146 23:19:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:00.146 23:19:52 -- pm/common@44 -- $ pid=1099209 00:29:00.146 23:19:52 -- pm/common@50 -- $ kill -TERM 1099209 00:29:00.146 23:19:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.146 23:19:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:00.146 23:19:52 -- pm/common@44 -- $ pid=1099211 00:29:00.146 23:19:52 -- pm/common@50 -- $ kill -TERM 1099211 00:29:00.146 23:19:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.146 23:19:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:00.146 23:19:52 -- pm/common@44 -- $ pid=1099236 00:29:00.146 23:19:52 -- pm/common@50 -- $ sudo -E kill -TERM 1099236 00:29:00.146 + [[ -n 628419 ]] 00:29:00.146 + sudo kill 628419 00:29:00.163 [Pipeline] } 00:29:00.183 [Pipeline] // stage 00:29:00.189 [Pipeline] } 00:29:00.206 [Pipeline] // timeout 00:29:00.211 [Pipeline] } 00:29:00.228 [Pipeline] // catchError 00:29:00.233 [Pipeline] } 00:29:00.252 [Pipeline] // wrap 00:29:00.258 [Pipeline] } 00:29:00.272 [Pipeline] // catchError 00:29:00.280 [Pipeline] stage 00:29:00.283 [Pipeline] { (Epilogue) 00:29:00.297 [Pipeline] catchError 00:29:00.299 [Pipeline] { 00:29:00.314 [Pipeline] echo 00:29:00.316 Cleanup processes 00:29:00.323 [Pipeline] sh 00:29:00.608 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:00.608 1099314 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:29:00.608 1099613 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:00.622 [Pipeline] sh 00:29:00.907 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:00.908 ++ grep -v 'sudo pgrep' 00:29:00.908 ++ awk '{print $1}' 00:29:00.908 + sudo kill -9 1099314 00:29:00.941 [Pipeline] sh 00:29:01.215 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:09.340 [Pipeline] sh 00:29:09.622 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:09.622 Artifacts sizes are good 00:29:09.638 [Pipeline] archiveArtifacts 00:29:09.645 Archiving artifacts 00:29:09.799 [Pipeline] sh 00:29:10.082 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:29:10.096 [Pipeline] cleanWs 00:29:10.106 [WS-CLEANUP] Deleting project workspace... 00:29:10.106 [WS-CLEANUP] Deferred wipeout is used... 
00:29:10.112 [WS-CLEANUP] done 00:29:10.114 [Pipeline] } 00:29:10.134 [Pipeline] // catchError 00:29:10.145 [Pipeline] sh 00:29:10.422 + logger -p user.info -t JENKINS-CI 00:29:10.430 [Pipeline] } 00:29:10.446 [Pipeline] // stage 00:29:10.451 [Pipeline] } 00:29:10.469 [Pipeline] // node 00:29:10.475 [Pipeline] End of Pipeline 00:29:10.514 Finished: SUCCESS
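A closing note on the epilogue: before the workspace is archived and wiped, the job stops the resource monitors it launched for packaging (via their pid files under the power/ output directory) and force-kills any process still running out of the test workspace, which is the pgrep / awk / kill -9 sequence visible above. A minimal sketch of those two cleanup patterns, with error handling simplified (pid-file names and the workspace path are taken from the trace):

  # 1) signal the collect-* monitors through their pid files
  power_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
  for pidfile in collect-cpu-load.pid collect-vmstat.pid collect-cpu-temp.pid collect-bmc-pm.pid; do
      [[ -e "$power_dir/$pidfile" ]] || continue
      kill -TERM "$(cat "$power_dir/$pidfile")" || true
  done

  # 2) kill anything still running out of the workspace before cleanWs
  ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  [[ -n "$pids" ]] && sudo kill -9 $pids || true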